F6

Re: F6 post# 173971

Wednesday, June 20, 2012 5:39:27 AM

An Eye Without an 'I': Justice and the Rise of Automated Surveillance


[Image caption: Hello, human, I'm here to see you. (MGM)]


Automated surveillance allows governments (and others) to data mine the physical world, yet little attention has been paid to the ethics of perpetual recording.

By Ross Andersen
Jun 14 2012, 1:35 PM ET

Over the past decade, video surveillance has exploded [ http://www.6wresearch.com/press-releases/video-surveillance-market.html ]. In many cities, we might as well have drones hovering overhead, given how closely we're being watched, perpetually, by the thousands of cameras perched on buildings. So far, people's inability to watch the millions of hours of video has limited its uses. But video is data, and computers are being set to work mining that information on behalf of governments and anyone else who can afford the software. And this kind of automated surveillance is only going to get more sophisticated as a result of new technologies like iris scanners and gait analysis [ http://eprints.soton.ac.uk/266142/ ].

Yet little thought has been given to the ethics of perpetually recording vast swaths of the world. What, exactly, are we getting ourselves into?

The New Aesthetic isn't just a cool art project; machines really are watching us, and they have their own way of seeing; they make mistakes that humans don't. Before automated surveillance reaches a critical mass, we are going to have to think carefully about whether its security benefits are worth the human costs it imposes. The ethical issues go beyond just video; think about data surveillance, about algorithms that can mine your financial history or your internet searches for patterns that could suggest you're an aspiring terrorist. You'd want to be sure that a technology like that was accurate.

Fortunately, our British friends are slightly ahead of the curve when it comes to thinking through the dilemmas posed by ubiquitous electronic surveillance. As a result of an interesting and contingent set of historical circumstances, the British now live under the watchful eye of a massive video surveillance system. British philosophers are starting to gaze back at the CCTV cameras watching them, and they're starting to demand that those cameras justify their existence. In a new paper called "The Unblinking Eye: The Ethics of Automating Surveillance" [ http://www.springerlink.com/content/2j1252667gg02717/ ], philosopher Kevin Macnish argues that the political and cultural costs of excessive surveillance could be so great that we ought to be as hesitant about using it as we are about warfare. That is to say, we ought to limit automated surveillance to those circumstances where we know it to be extremely effective. I spoke to Macnish about his theory, and about how technology is changing surveillance, for better and for worse.

I was thinking the other day that it's curious that CCTV should have bloomed in Britain, whose population we think of as being less security-crazed than the population of the United States. Britain is more urban than America, but it can't just be that, can it?

Macnish: One interesting historical point, and I don't think this explains the whole thing but it helps, is that most other western countries have a recent history of some form of dictatorship, the US excepted. Most of Europe was under a dictator or occupied by a dictatorship within living memory, and so I think there is an awareness there about the dangers of government. It's possible that Britain might be a little bit more laissez-faire about surveillance because we haven't had that level of autocratic control since the 17th century. I think in America, while the history is a little bit different, you have a very strong social consciousness about separation of powers within the state, and between the state and the people. I think there is a general suspicion of the state in America, which we often just don't have in the U.K.

Then you have to couple that with some very powerful images. In 1993 there was an infamous case of a two-year-old named James Bulger who was kidnapped by two other children, themselves about 10 or 11, who then killed him in a very horrible way that mimicked a murder from one of the Child's Play films, which led to a massive reaction against horror films and whatever else. At the time there was a CCTV image taken of the two boys picking up this toddler and walking off with him, holding his hand. Ironically, the CCTV didn't actually help with solving the case. The police had already heard about these two boys and were already investigating them, but the image came across on our TV screens and into our newspapers and it was really powerful. That helped dispose people favorably towards CCTV here. It hadn't been thoroughly researched at the time, and it was suspected at a common-sense level that it would help deter crime, that it would detect and catch criminals, and that it would help find lost children. So the government poured hundreds of millions of pounds into CCTV cameras all around the country, and then retailers and businesses bought CCTV cameras for their own security---it just took off. As a sociological study, it's fascinating. A lot of my American friends who come here feel really freaked out by the number of cameras we have, and with good reason.

What is automated surveillance? Where and how is it most commonly used? I know the Chinese have been developing a kind of gait analysis, a way to identify people on video based on the length and speed of their stride. In what other ways is this technology gathering steam?

Macnish: There are things like iris recognition; there are areas where people are looking at parts of the face for identification purposes; there are all of these ways that you can now automate the recognition of individuals, or the intentions of individuals. There is a ton of research on these capabilities, in the U.S. and China especially, and as a result these techniques are catching on in a way that they weren't five or ten years ago, when we didn't yet have the technology to implement them. We've had the artificial intelligence capabilities for a while---since the late 1970s we've been able to write programs that could recognize when a bag had been left by a particular person in a public place. But we didn't have the camera technology or processing technology to roll it out. Now you have digital cameras, and increased storage and processing capacity, and so you're starting to see these really startling things happening in automated surveillance.

What advantages does automated surveillance have over traditional, human-directed surveillance?

Macnish: The problem with human surveillance is the humans. People get bored; they look away. In many operation centers one person will be monitoring as many as 50 cameras, and that's not a recipe for accuracy. Studies have shown that a person can be watching a single screen and still miss what's happening on it, so imagine watching a busy scene in a mall with 20 people in it, or a field of 50 different screens---you're not going to be able to see what every single person does. You might very well miss the person who puts their bag down and walks off, and that bag might be the one containing the bomb. If you can automate that process, then, in theory, you're removing the weakest link in the chain and you're saving a human being a lot of effort. The other problem with us humans is that we tend to be subject to prejudices. As a result we may focus our attention on people we find attractive, or on people we think are more likely to be terrorists or more likely to be up to no good, and in the meantime we might miss the target we're supposed to be looking for. And this doesn't just happen with terrorists; it can happen with shoplifters too.

On the other hand, we humans have common sense, which is something that computers lack and will probably always lack. For instance, there are computer surveillance programs designed to recognize a person bending down next to a car for a certain period of time, because this is behavior associated with stealing cars. At the moment the processing capacity is such that a computer can recognize a person bending down by a car and staying bent down for five seconds, at which point it will send an alert. Now, if a human is watching a person bending down next to a car, they will look to see if they're bending down to pet their dog, or to tie a shoelace, or because they've dropped their keys. The computer isn't going to know that.
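To make that gap concrete, here is a minimal sketch in Python of the kind of dwell-time rule Macnish describes: flag anyone who stays bent down beside a car past a threshold. The five-second threshold, the Observation fields, and the find_alerts helper are illustrative assumptions, not any real system's API.

```python
from dataclasses import dataclass

DWELL_THRESHOLD_S = 5.0  # assumed: seconds bent down before an alert fires

@dataclass
class Observation:
    near_car: bool     # proximity detector: person is within reach of a car
    bent_down: bool    # pose classifier: person is bent down
    timestamp: float   # seconds since the start of the video

def find_alerts(track):
    """Return the timestamps at which this naive rule would raise an alert."""
    alerts = []
    bend_start = None
    for obs in track:
        if obs.near_car and obs.bent_down:
            if bend_start is None:
                bend_start = obs.timestamp            # timer starts
            elif obs.timestamp - bend_start >= DWELL_THRESHOLD_S:
                alerts.append(obs.timestamp)          # rule fires
                bend_start = None                     # reset after alerting
        else:
            bend_start = None                         # stood up / walked away
    return alerts

# Someone tying a shoelace for six seconds triggers exactly the same alert
# as a car thief: the rule has no access to intent.
shoelace = [Observation(True, True, t) for t in range(0, 7)]
print(find_alerts(shoelace))  # [5]
```

Nothing in the rule distinguishes theft from a dropped set of keys; that is precisely the common-sense gap the interview is pointing at.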

In your paper, you describe the way that cultural differences often dictate the way that people move through crowds. For instance, in Saudi Arabia, people walk much slower than they do in London. Another example: in some cultures, people require less personal space than in others. Why are those differences problematic for automated surveillance?

Macnish: The particular automated surveillance I was looking at was designed to measure the distance between people to determine whether or not they were walking together. The theory behind it was that if you and I are walking together through a train station and I put my bag down next to you so that I can go off and get a newspaper or something like that, then clearly the bag is not unattended. This is one of those cases where a human being would instantly recognize that we are walking together and that we are friends, and that the bag isn't a danger, but the computer wouldn't recognize that we were friends. Instead the computer would see an unattended bag and it would send out an alert, and so when I come back from getting my coffee, or my newspaper, I might find you swarmed by security guards, guns drawn.

The programmers behind this project were trying to write software that could determine whether two people walking in public are associated with each other in some way, and the way they did this was to use an algorithm called a "social force model," which looks at how closely people are walking together, how far apart they are, how they interact with nearby objects, and how people walking towards them react to them. Those data points, together, can give you a determination of whether or not people are associated in some way.

But problems appear when you consider that different cultural groups have different norms and habits, and that the social and spatial parameters of middle-class white guys in the west might be totally different from the social and spatial parameters of two Indian women. There are all these subtle aspects and differences in the way that people from different cultures interact, and there is very little data on how people of different cultures, different sexes, and different ages walk and act in public. Most of our data is drawn from western middle-class scenarios, things like universities or whatever. It's not the deliberate prejudice that you might see with a camera operator, who might focus on Somalis or Arabs, or some other particular group, but its effects can be just as bad.
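For illustration, here is a toy Python sketch of the kind of association scoring a social-force-style model might compute, using just two of the cues Macnish lists: spacing and heading. The exponential decay and especially the COMFORT_DIST_M constant are invented assumptions; as he points out, a hard-coded "normal walking distance" is exactly where cultural bias enters.

```python
import math

# Assumed "comfortable walking-together" spacing in meters. As the interview
# notes, a constant like this is where cultural bias creeps in: comfortable
# distance varies across cultures, sexes, and ages.
COMFORT_DIST_M = 1.2

def association_score(pos_a, pos_b, vel_a, vel_b):
    """Return a rough 0..1 score that two tracked pedestrians are together."""
    dist = math.hypot(pos_b[0] - pos_a[0], pos_b[1] - pos_a[1])
    # Proximity term: 1.0 when side by side, decaying as spacing grows.
    proximity = math.exp(-max(0.0, dist - COMFORT_DIST_M))
    # Heading term: cosine similarity of the two velocity vectors.
    dot = vel_a[0] * vel_b[0] + vel_a[1] * vel_b[1]
    norms = math.hypot(*vel_a) * math.hypot(*vel_b) or 1e-9
    heading = max(0.0, dot / norms)
    return proximity * heading

# Two people a meter apart, moving the same way: scores near 1.0.
print(association_score((0, 0), (1, 0), (1, 0), (1, 0)))   # ~1.0
# Strangers passing in opposite directions: scores 0.0.
print(association_score((0, 0), (5, 0), (1, 0), (-1, 0)))  # 0.0
```

Two friends from a culture where people walk farther apart would score exactly like strangers here, so the bag one of them sets down gets flagged as unattended.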



Your paper argues for a theory of efficacy when it comes to surveillance. You seem to say that surveillance can only be ethical if we do it very well.

Macnish: Yes, but it goes deeper than that. My overall project is to argue that the questions that are typically raised in the Just War tradition are the questions that we should be asking about surveillance, in order to see whether or not surveillance is justified. One way of doing that is to question these technologies' chances of success. In Just War theory we have this notion that a war is unethical if you are unlikely to succeed when you enter into it, because it means sending soldiers to die in vain. That was the perspective I was coming from with the argument about efficacy---if there isn't a considerable chance of success then we shouldn't be pursuing these techniques.

But that rationale, Just War theory, is specific to war for a very important reason: if we embark on ineffective wars, we risk disastrous consequences and enormous human costs. It's not clear that surveillance ought to have a precautionary principle as strong as the one governing warfare. Why do you think that it should?

Macnish: You have to look at the counterfactual: if we have arbitrary surveillance, which you could argue is what we have in the UK, where we have virtually no regulation of CCTV cameras, there is an extent to which you start to wonder why we're being surveilled. Why are we being watched? And the surveillance can have quite an impact on society; it can shape society in ways that we may not want. If you notice all of this surveillance, and you also notice that it's ineffective, you start to wonder if there's an ulterior motive for it. Heavy surveillance, of which CCTV is only one variety, can create a lot of fear in a population: it creates a sense of vulnerability, a fear of being open to blackmail or other forms of manipulation as a result of what's being recorded, and these can, together, create what are typically called chilling effects, where people cease to engage in democratic speech or democratic traditions because they're concerned about what might be discovered about them or said about them. For instance, you might think twice about attending a political demonstration or political meeting if you know you're going to be watched. In the UK, there is a special police unit called the FIT (Forward Intelligence Team) that watches demonstrations, looking for certain troublemakers within political demonstrations---that might dissuade people from going to demonstrate. There is now a counter-protest group called FITwatch that goes out to watch the FIT officers who are watching the demonstrators, to try to ameliorate this problem, which is viewed as potentially damaging to democratic engagement.

On balance, what about Britain's CCTV system? How does it score in your efficacy framework?

Macnish: I think it probably fails on most counts. I was thinking about this last night; I've been getting into drones and automated warfare more recently. Boeing is currently working on a drone that can stay in the air for five years without refueling. One that can stay up for four days was successfully tested just a couple of days ago. Think about a drone flying above you for five years. If you're in occupied Afghanistan, that is going to be very, very intimidating, and it would be just as intimidating if it were happening in our own country, if there were surveillance drones constantly flying above us. Ultimately, there is very little difference between a drone flying above a city and the sort of CCTV surveillance that we have here all the time; it's just that one is more out of the ordinary, because we've gotten used to the other.

You argue that in some ways automated surveillance is less likely to trigger privacy concerns than manual surveillance. Why is that?

Macnish: Say you are taking a shower and a person walks in while you're in the bathroom. You might feel an invasion of privacy, especially if you don't know that person. If a dog walks in, are you going to feel an invasion of privacy? Probably not. I mean, there might be some sense of "hey, I don't want this dog looking at me," but it's only a dog. It might be that being watched by a computer is like being watched by the dog: you aren't entirely comfortable with it, but it's better than a human being, a stranger. Now, if the computer recorded the images it saw and then allowed a human to see those images, then, yes, that would be an invasion of privacy. If it had some automated process whereby, instead of a human seeing what you do in private, it took some action against you as a result, that would likewise be an invasion of privacy. But yes, one benefit of automated surveillance is that it can take the human out of the equation, and that can be a net positive for privacy under certain circumstances.

In your paper you argue for a middle ground between manual surveillance and automated surveillance. What does that ideal middle ground look like in the context of something like the CCTV system in the UK?

Macnish: One reason that I argue for a middle ground goes back to the fact that computers don't have much common sense, which can lead to false positives, as we saw with the unattended bag or the person who drops their keys in a parking garage. A computer could be very helpful for filtering out some obvious false positives, but ideally a human should then come in to look at what's left. A computer can also provide a good filtering mechanism for purposes of privacy. For instance, a computer could blur out people's faces, or their entire bodies, so that a human operator sees only the action in question. At that point, if the action is still deemed suspicious, the operator can specifically request that the image be un-blurred so he can see who the person is and decide how to respond to them.
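A minimal sketch of what that blur-by-default, audited-unblur arrangement could look like, with hypothetical names throughout (PrivacyFilter, request_unblur); nothing here is drawn from a real CCTV product.

```python
import datetime

class PrivacyFilter:
    """Blur-by-default viewer with a logged, explicit unblur step.

    A sketch of the middle ground described above; class and method
    names are hypothetical, not drawn from any real CCTV product.
    """

    def __init__(self):
        self.audit_log = []  # every request to reveal identities is recorded

    def frame_for_operator(self, frame, unblur_approved=False):
        """Operators normally see only blurred footage."""
        return frame if unblur_approved else self._blur_faces(frame)

    def request_unblur(self, operator_id, reason):
        """Record who asked to see identities, when, and why."""
        self.audit_log.append({
            "operator": operator_id,
            "reason": reason,
            "time": datetime.datetime.utcnow().isoformat(),
        })
        # A real deployment might require supervisor sign-off here.
        return True

    def _blur_faces(self, frame):
        # Placeholder: a real system would run a face detector and
        # pixelate each detected region before display.
        return f"<blurred view of {frame}>"

# Usage: suspicious action spotted in blurred footage, then a logged unblur.
pf = PrivacyFilter()
print(pf.frame_for_operator("frame_0042"))            # blurred by default
if pf.request_unblur("operator_7", "unattended bag"):
    print(pf.frame_for_operator("frame_0042", True))  # identities visible
```

The design choice is that revealing an identity is an auditable event rather than the default view, which is what keeps the human "out of the equation" until there is a stated reason to bring them in.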

In the context of automated surveillance, does privatization worry you?

Macnish: That's a really interesting question. I think the privatization of creating the software and the hardware doesn't necessarily bother me in and of itself; what concerns me more is the privatization of the operation of the surveillance. So, privatizing the people who are watching the cameras, privatizing what is done with the information from the cameras---when private companies hold that sort of information, especially if they're not regulated, there are all sorts of abuses that could flow from that. There's a second thing that might be worth saying about that as well, and it ties back in with the Arab Spring. After Mubarak fell, when we went into his secret police headquarters, we found all sorts of British, French and American spying equipment, which people like Boeing and whoever else sold to the Libyans and Egyptians knowing very well what would happen with it. Of course there are companies right now that are either still doing the same for Syria, or have only recently stopped. I think that's a legitimate concern as well.

Video surveillance like CCTV is only one kind of automated surveillance; automated data surveillance is another. I'm thinking about intelligence organizations looking for patterns in millions of financial transactions and internet searches. Are there overlaps in the ethical issues presented by data surveillance and camera surveillance?

Macnish: Definitely. The same questions that we're asking about CCTV should be asked about data surveillance. Potentially I think that could be very concerning. And that's not just true of intelligence organizations, but of commercial organizations as well. The New York Times recently ran an article about Target and the lengths it would go to in order to learn that a 16-year-old girl was pregnant---so much so that it knew before her dad did. Those are the kinds of questions commercial organizations are looking to answer. And you have to ask what they do with that information---are they offering better deals to the sort of customers they would rather have as their clientele? Are they trying to put off people who they would rather not have as their clientele? For instance, frequent fliers get all sorts of deals on their flights because they spend a lot of money with the airline. Are you creating a situation where the rich, successful people are the ones who get offered better deals to fly, whereas poorer people don't get those same offers? The questions raised by big data are very interesting. It's actually a very rich area for research; we haven't even scratched the surface of it.
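As a rough illustration of the purchase-pattern scoring the Target story describes, here is a toy Python sketch: weight a few signal products and flag customers whose combined score crosses a threshold. The items, weights, and threshold are all invented for illustration; the real model was proprietary and far more sophisticated.

```python
# Invented signal items and weights; the real Target model was proprietary.
SIGNAL_WEIGHTS = {
    "unscented_lotion": 0.3,
    "prenatal_vitamins": 0.9,
    "cotton_balls": 0.2,
    "large_tote_bag": 0.1,
}
FLAG_THRESHOLD = 1.0  # assumed cutoff for acting on the score

def pattern_score(purchases):
    """Sum the weights of any signal items in a purchase history."""
    return sum(SIGNAL_WEIGHTS.get(item, 0.0) for item in purchases)

history = ["bread", "unscented_lotion", "prenatal_vitamins"]
score = pattern_score(history)
if score >= FLAG_THRESHOLD:
    print(f"flagged for targeted offers (score {score:.1f})")  # score 1.2
```

Even this crude version makes the ethical point: the customer never volunteered the inference, and the same scoring machinery works just as well for deciding whom to court and whom to quietly put off.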

Copyright © 2012 by The Atlantic Monthly Group

http://www.theatlantic.com/technology/archive/2012/06/an-eye-without-an-i-justice-and-the-rise-of-automated-surveillance/258082/ [with comments]

---

(linked in):

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=76019837 and following

http://investorshub.advfn.com/boards/read_msg.aspx?message_id=76596208 and preceding (and any future following)



Greensburg, KS - 5/4/07

"Eternal vigilance is the price of Liberty."
from John Philpot Curran, Speech
upon the Right of Election, 1790


F6
