The non-average WoMan in the Street: anarcho-magick and street surveys

Kate Hoolu

I guess we’ve all had them: that unthreatening smile and lilting “could you spare a moment to answer a few questions?” in the street (although this article applies equally to phone surveys and magazine questionnaires).

There are of course those among us who can simply say ‘NO’ in a firm manner and brush past. A few are even more brisk with the surveyors: a crisp, modulated and authoritative ‘fuck off’ can work wonders with the more mature representatives, and give a spring to one’s step for some time afterwards, if executed well. However, someone is paying these people to go out and ask the questions, and by not answering you rob yourself of any representation in the sample being surveyed.

So what? 

Good point, but it can be fun to be involved, and not out of any kind of community spirit or social responsibility: it can be real liberating magic to have a bit of a fiddle with the answers. So what if you stop to talk to them? What can you do to brighten your day, and make the survey, well, if not more accurate, then more “interesting” for the poor buggers who have to interpret the results?

Bear in mind the way these things are designed: the survey will not want a cross-section of everyone; that’s way too many variables, and the maths becomes really nasty back in the office. The “hard sciences” such as physics work with a relatively small number of forces (e.g. gravity, pressure, temperature) on a small number of states of matter (solid, liquid or gas). Social sciences work with a much larger scope of “materials” (e.g. 55 million UK citizens of many races, cultures, tastes, ages, outlooks, socio-economic groups and general attitudes, in cities, towns and villages) under a wide range of (often unknown) conditions and with many disparate underlying factors. Even the hard sciences have problems supporting their theories, so it would be a lot to expect that street surveying, as sociology or psychology with a far wider scope, could prove anything conclusively. But they try, and more importantly, governments and big businesses listen to their findings.

What they are looking for might be, for example, all the mothers with children out shopping between the hours of 12 and 2pm, to discuss (perhaps) enforced vaccination of their children with drugs which have not yet been proven safe; the opening questions will be there to make sure you fit the client group… so make sure you answer those entirely falsely, and be prepared to think on your feet and lie a great deal, but consistently.
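
For the statistically curious, here is a minimal sketch of how that screening step works. The questions and the target profile below are entirely invented; the point is only the filtering logic.

```python
# A toy screener; the questions and the target profile are invented, but the
# filtering logic is how the opening questions decide whether the rest of the
# questionnaire ever gets asked.
TARGET_PROFILE = {
    "has_children_under_5": True,
    "shops_here_weekly": True,
}

def passes_screener(answers):
    """Only respondents matching the client group get the real questionnaire."""
    return all(answers.get(key) == value for key, value in TARGET_PROFILE.items())

honest_answers = {"has_children_under_5": False, "shops_here_weekly": True}
creative_answers = {"has_children_under_5": True, "shops_here_weekly": True}

print("honest respondent screened in:  ", passes_screener(honest_answers))
print("creative respondent screened in:", passes_screener(creative_answers))
```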

There was a famous study, shortly after the Bulger killing in the UK (see Ramsey’s ‘What I did…’ elsewhere for more info), about young children who had been watching (quote) video nasties (unquote). The survey slipped in spurious titles for video nasties that did not exist, and a whopping 68% of the study group admitted to having seen them. Since the films did not exist, the implication is that the children in the sample were lying.

This was only one survey of many that showed up wholesale deception. This bias is called the experimental effect: simply by being part of an experiment, participants’ behaviour or responses are affected (adversely or otherwise) by their knowledge of the artificiality of the situation and of the fact that they are being observed, so the results are not “real”. Survey designers EXPECT people to lie, especially if the answers may have some bearing on how the respondent is perceived. For example, ‘the average man’ in surveys says that he has sex a lot more often than ‘the average woman’ in surveys. Excluding the possibility of a lot of men going with one very tired woman, it’s easy to see what’s going on here: blokes want to be seen as ‘real men’, and thus inflate the number of times they say they have sex (and dick size, pints they drink, how fast or good a driver they are, etc etc). It works the other way around too: people say they smoke far less in surveys than would seem to be the case based on tobacco sales.
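
If you want to see how little systematic fibbing it takes to shift a headline average, here is a toy simulation. Every number in it is invented for illustration; only the direction of the effect is the point.

```python
# Toy simulation of self-report bias; every number here is invented.
import random
from statistics import mean

random.seed(1)

TRUE_MEAN = 1.5       # assumed "real" weekly frequency of whatever is being asked
BRAG_FACTOR = 2.0     # inflation by respondents who want to look good
SHAME_FACTOR = 0.5    # deflation by respondents playing a habit down

truth = [max(0.0, random.gauss(TRUE_MEAN, 0.4)) for _ in range(1000)]

bragged = [x * BRAG_FACTOR for x in truth]       # e.g. how often 'the average man' has sex
played_down = [x * SHAME_FACTOR for x in truth]  # e.g. how much people say they smoke

print(f"true mean:        {mean(truth):.2f}")
print(f"bragged mean:     {mean(bragged):.2f}")
print(f"played-down mean: {mean(played_down):.2f}")
```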

Once it has been established that lies have been told in one part of a survey, it is doubtful whether any of the results can be regarded as of any use; repeat surveys may only serve to verify that the (dis)honesty of the sample group stays approximately the same, and no useful data on the original subject matter may ever arise. That’s where you come in! Be a random factor in a survey, and hope that several others will also be. Result? A lot of mischief, a lot of head-scratching back at base, and maybe even a long delay in crap policy decisions being made and implemented.
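
How much does a sprinkling of random factors matter? Here is a hedged little sketch, with a made-up “true” level of support, showing how coin-flip respondents drag a measured figure towards 50% and bury whatever signal was there.

```python
# A hedged sketch of what "random factors" do to a poll; the true level of
# support and the saboteur rates are all invented.
import random

random.seed(42)

def run_poll(n, true_support=0.7, saboteur_rate=0.2):
    """Measured 'yes' share when some respondents answer with a coin flip."""
    yes = 0
    for _ in range(n):
        if random.random() < saboteur_rate:
            answer = random.random() < 0.5            # random factor at work
        else:
            answer = random.random() < true_support   # honest answer
        yes += answer
    return yes / n

for rate in (0.0, 0.1, 0.2, 0.4):
    print(f"random factors {rate:.0%}: measured support {run_poll(10_000, saboteur_rate=rate):.1%}")
```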

No questions are asked in a vacuum. Remember that the survey in front of you will be the result of considerable design and evaluation; it’s not (usually) something they just knocked up in the office that morning. The questions should have been thoroughly analysed and (as far as possible) framed unequivocally. So, your first task is to find an answer the surveyor will NOT have a tick box for… if it hasn’t got a tick box it can’t be coded, and if it can’t be coded it can’t be averaged out… get the idea?
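
To make that concrete, here is a toy coding frame of the kind a survey office might use; the categories and the answers are invented. Anything without a tick box never even reaches the tally.

```python
# A toy coding frame; categories and answers are invented. Anything without a
# tick box never reaches the tally, and so never reaches the average.
CODING_FRAME = {"yes", "no", "don't know"}

responses = ["yes", "no", "yes", "mu", "only on alternate Thursdays", "don't know"]

tallies = {code: 0 for code in CODING_FRAME}
uncodeable = []

for answer in responses:
    if answer in CODING_FRAME:
        tallies[answer] += 1
    else:
        uncodeable.append(answer)   # no tick box, so it quietly disappears

print("tallied:", tallies)
print("never counted:", uncodeable)
```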

There is often an implicit bias arising from the conditioning and norms of the researcher. For example, one hypothetical team may define poverty in a completely different way to another team in a nearby city. Both teams produce “poverty statistics” for their cities, and outwardly it seems that one city has more of a problem than the other, when the defined measures of poverty (or of any other cultural state) can be so disparate as to render any comparison impossible. Look for bias in the questions or in the terms and definitions, and remark upon it to the surveyor. Make sure they note it down.
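
Here is a minimal sketch of that definitional problem, using invented incomes and two invented poverty lines: same people, two very different headline figures.

```python
# Two invented "poverty lines" applied to the same invented weekly incomes,
# showing how the headline figure depends entirely on the definition used.
incomes = [90, 110, 130, 150, 180, 220, 260, 310, 400, 550]

def poverty_rate(incomes, threshold):
    return sum(1 for inc in incomes if inc < threshold) / len(incomes)

median = sorted(incomes)[len(incomes) // 2]

team_a = poverty_rate(incomes, threshold=0.6 * median)  # "below 60% of median income"
team_b = poverty_rate(incomes, threshold=200)           # "below a fixed basket of goods"

print(f"Team A's poverty rate: {team_a:.0%}")
print(f"Team B's poverty rate: {team_b:.0%}")
```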

Watch out for closed questions that imply a point of view you don’t hold, and don’t be afraid to backtrack on these. An extreme example (but one that appeared in a tabloid UK newspaper in the 90s) asked which method of capital punishment (hanging, electrocution or lethal injection) should be reintroduced for the murder of children. This doesn’t allow a response of not reintroducing capital punishment at all. It also gives no definition of murder, or of children. By the way, how old is a child? Is it mental or physical age? Perhaps the murder of someone with the body of a 25-year-old and a mental age of 10 would count as a child murder?

There is a similar problem with rape. “Rape” has to be defined accurately; the offence of male rape is still a very grey area, and in law at present only women can be raped: anything else is (legally) “sexual assault”. Extreme cases of sexual harassment are also legally ‘sexual assault’ (psychological assault and so on included). So in theory the same sentence could be passed for sexual harassment as for male rape. Try dropping that one into your remarks on any questions about the law in a survey… they will NOT have a tick box for it.

Other methods of scoring use some kind of one-to-five or one-to-seven point scale. People tend not to respond at the extreme ends of the scale, so do exactly that: give a lot of extreme responses (either 1, or 5, or 7, or whatever the top-end score is) and try to look like you mean it when you give them. Bear in mind that at least one of the scale items will be a ‘catch question’, a reverse phrasing of a question that has already been asked; so if you answered “high” to the first question you need to answer “low” to the other one (and vice versa) to remain coherent.
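
Here is a hedged sketch of how those catch questions get handled back at the office; the item wordings, the five-point scale and the crude consistency check are all invented for illustration.

```python
# A hypothetical pair of scale items, one reverse-phrased as a catch question.
# The wordings, the five-point scale and the crude consistency check are all
# invented for illustration.
SCALE_MAX = 5

items = [
    {"text": "I feel safe walking here at night.",          "reverse": False},
    {"text": "Walking here at night makes me feel unsafe.", "reverse": True},
]

def recode(raw, reverse):
    """Flip a reverse-phrased item so both items point the same way."""
    return (SCALE_MAX + 1 - raw) if reverse else raw

# A coherent extreme responder: top score on the first item, bottom on the catch item.
answers = [5, 1]

scores = [recode(a, item["reverse"]) for a, item in zip(answers, items)]
consistent = max(scores) - min(scores) <= 1   # crude check used to spot careless answering

print("recoded scores:", scores)
print("looks consistent:", consistent)
```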

Placement and timing of the survey are important. In print it is easy to place something to garner the requisite responses to suit a particular agenda: say, a questionnaire about criminal sentencing printed next to a story of a copper being killed by crooks, or a pensioner being mugged. Definitely a skewed response from that one. On the street it’s more about timing: you will get more answers about fear of crime if you take questionnaires out at times when perceived vulnerable groups are on the street, so ‘pensioners’ days’ at shopping centres (traditionally a Thursday, when the pensions were handed out; nowadays a bit harder to work out which days, but it can be done). Alternatively, the timing element can be during a media-reported crime wave… See the techniques at work here?

“Scientists demonstrate the truth of a hypothesis by repeated experimentation under relatively fixed (or known-variable) conditions, in an attempt to show a recurring theme or result which agrees with their initial hypothesis.” The secret of getting good and useful answers is to ask the questions properly, which might require an expert linguist to work with the other researchers, and several smaller test studies to fine-tune the experiment. If you are lucky enough to be stopped by someone running a pilot study, your interference might set the work back YEARS.

Many theories, especially in sociology and psychology, are framed from a position of bias (unconscious or otherwise), and there is a tendency for the results of experiments and surveys to reflect that inherent bias as much as any “objective reality”. In addition, there are cases where results are spurious due either to unpredictable variables or to poor understanding of associated factors that invalidate the initial theory or the actual method of research. For example, a 1971 study of mental illness in ethnic groups appeared to show very high rates among Puerto Ricans compared to other groups; whereas in “reality” an associated factor, the low social stigma attached to admitting symptoms of mental illness in the New York Puerto Rican community, all but invalidated the result. The study only showed that Puerto Ricans are more open about admitting their symptoms, something already known within that community (but unfortunately not to the researchers), and hence a waste of time (and money).
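
The arithmetic of that kind of confound is simple enough to sketch. The rates and propensities below are invented, but they show how two groups with an identical “true” rate can come out looking very different once willingness to report gets in the way.

```python
# A toy illustration (all numbers invented) of how a confounding factor, here
# willingness to report symptoms, manufactures an apparent difference between
# two groups whose "true" rates are identical.
TRUE_RATE = 0.20   # assumed actual prevalence, the same in both groups

reporting_propensity = {
    "group with little stigma about reporting": 0.90,
    "group with heavy stigma about reporting": 0.45,
}

for group, propensity in reporting_propensity.items():
    measured = TRUE_RATE * propensity   # only those willing to admit it get counted
    print(f"{group}: measured rate {measured:.1%} (true rate {TRUE_RATE:.0%})")
```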

So what’s the point of doing all of this? Some childish mischief? Not totally. For sure, it can be fun to mess around with these surveys; something to brighten a dull day, anyhow. What it also achieves is a blow for the individual. Surveys and data collection are an attempt to standardise the world, so that we can have a standard-sized and apportioned world, suited to the average. “Science” is another social construct that imposes authority upon certain pieces of information and not upon others, regardless of whether it WORKS or not… for example, the “science” of aerodynamics precludes the ability of a bumblebee to fly. One big danger of a bad but “plausible” survey is that if its spurious ‘scientific’ results are trumpeted loudly enough, and for long enough, by the media or government, they can become “truth” by influencing social norms or official policy. By contributing to messing up the data you save yourself, and others. There is considerable scope in research for sharp practice: meeting the result-led needs of the body funding the work, cheating, fabricating positive results and selecting only the favourable ones. The scientists cheat all the time, so why shouldn’t the respondents?

If you are not average, and if you want to be an individual, it’s time to go to work! If you don’t want to be indexed, filed and treated like a number, like The Prisoner, then get out there and give meaning to the phrase attributed to the late Sid Vicious: “Yeah, I’ve met the man in the street, and he’s a C**t”.

KH