
Cons, Constructions, and Misconceptions of Computer Related Crime: From a Digital Syntax to a Social Semantics


Published on Aug 01, 2017

Abstract

Has the framing of computer crime been a process which has, in effect, left us all framed? What is it that we think we understand when we use terms like “internet crime,” “cybercrime,” or “technocrime,” and in what sense does this understanding constitute knowledge? In particular, is it the kind of knowledge which can be defined as “social scientific”? In this paper, I apply one of the key distinctions used to define computational processes—that made between a syntax and a semantics—to illustrate some of the problems that have affected our thinking about cybercrime and undermined our responses to it. I argue that the construction of cybercrime in terms of syntactic rather than semantic considerations has fostered the myth that it is a technical crime requiring technical solutions. Worse, by emphasizing cybercrime’s machinic over its human origins, syntactic interpretations have inflated its risks and directly contributed to the ‘culture of fear’ surrounding cybercrime. Drawing upon qualitative analytic techniques such as thematic visualization, I outline the need for a more sociocultural account of the origins of cybercriminality, one that might not only help stem the increasingly counterproductive influences of the cybersecurity industry, but also contribute to more effective ways of containing cybercrime itself.

Introduction

It is now accepted as a given that there is a “crisis” around the use and misuse of our most significant contemporary technology—the information and communication systems that pervade every aspect of everyday life. At its worst, this is a crisis which has been regarded as potentially catastrophic (Noik, 2011). Back in 2008, I surveyed the emerging criminal landscape around the internet and information technology (McGuire, 2008) and found a number of recurring themes in how this problem was being framed. Specifically, there was an abject failure to properly appreciate that problems rooted in social interaction are ultimately problems of social interaction rather than of how those interactions are mediated. In cybercrime, the medium remains very much only part of the message.

In this paper, I will examine this contention in relation to a distinction that has received little or no attention in the context of cybercrime—that made between a syntax and a semantics. Though the distinction is one that has traditionally been explored more within standard linguistics and the philosophy of language (see for example Lycan, 2000), it has acquired obvious significance within computer science given the centrality of programming languages in this field. Although there are differences in the way the distinction is applied across these contexts (cf. Leonhardt & Röttger, 2006), the general idea (roughly) is that a syntax involves grammar—the rules which determine the correct use of symbols and their combinations in any language—while a semantics determines what a syntactically correct sequence of symbols means (Anderson, 2009). For example, in English, the syntactic rules say that “the cat is on the mat” is a well-formed sentence, but “cat mat on is the” is not. Semantically, the correct syntactic formation allows us to understand that a cat is on a mat (as opposed, for example, to a mat being on a cat). In computational terms, syntax has often been granted a more elevated status than semantics because effectively “what we call computation takes place on the level of syntax. It is a purely formal procedure taking place in a physical mechanism” (Müller, 2008, p. 222). Put more simply, it is because the syntactic states of a program can be linked to the physical states of a machine that computation becomes possible. As a result, syntax appears to possess causal as well as grammatical significance—serving to mediate relations between physical states and abstract symbols. In this sense, syntax determines whether a machine works at all, or whether its program results in malfunctions (for example, malfunctions similar to those described by the Halting Problem (Parkes, 2002)).1 Thus, in a programming language like C++, the syntactic symbol “;” acts as a statement terminator, thereby allowing “x=y” and “y=z+1” to be treated as separate statements rather than parts of the same command. In doing so, this simple syntactic symbol and the rules which govern its use have profound effects on the functioning of any C++ program.
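
The same point can be made concrete in a short sketch, given here in Python rather than C++ purely for brevity; the statements and values are illustrative assumptions rather than anything drawn from the article.

```python
import ast

# Syntax: the grammar decides whether a string of symbols is well formed at all.
well_formed = "x = y; y = z + 1"   # ';' terminates one statement and begins another
malformed = "x = ; y +"            # violates the grammar

ast.parse(well_formed)             # accepted: parsed into two distinct statements
try:
    ast.parse(malformed)
except SyntaxError as err:
    print("rejected by the grammar:", err.msg)

# Semantics: what a well-formed sequence of symbols means once it is executed.
state = {"y": 1, "z": 2}
exec(well_formed, state)           # the two statements now alter the program state
print(state["x"], state["y"])      # -> 1 3
```

The grammar decides only whether the string of symbols is admissible; what an admissible string does to the state of the machine is the separate, semantic question.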

A number of examples demonstrate the way syntactic responses to cybercrime have been favored over more complex, semantic measures. For example, we all know that malware infections can be strongly related to human factors like the intention to do harm, failures and errors in taking simple security precautions, or emotions like greed, curiosity, lust, fear, and so on, which drive individuals to click on links they should not. Yet, according to the UK National Cyber Security Centre (2016), protections against malware infections center largely upon end-user device protection, antivirus and malicious code checking solutions, content filtering capability on all external gateways, installing firewalls, disabling certain browser plugins or scripting languages, disabling a device’s autorun function, or “ensuring systems and components are well configured according to the secure baseline build” (NCSC, 2017). Even offenses like phishing—which depend heavily upon a successful social engineering of meanings between perpetrator and victim—have often been thought to be best addressed by syntactically driven measures such as Secure Connections (HTTPS), Secure Login Features, Web Browser Features and Settings, Email Client Configuration, SPAM Filters, or Alternative Transaction Verification Channels (cf. Infosec, 2017).

I argue that reflecting upon the elevated status of syntax, and the way it is distinguished from semantics, offers a more precise way of revisiting the familiar debates about whether cybercrime is a technical crime or one driven more by human factors (Leukfeldt, 2017). Not only is the latter term highly vague (what is and what is not a human factor?), it is also clear that human interactions with information technology are fundamentally dependent upon the meanings and interpretations involved—in other words, the relevant semantics. Similarly, syntax is well understood, in both conceptual and operational terms, while ‘technical’ is not. In turn, invoking the syntax/semantics divide not only has value in clarifying some of the governing perceptions around cybercrime and the mythologies which have developed around it, but it also helps challenge some of the current assumptions about the kind of methodology best suited to developing appropriate and actionable knowledge about cybercrime. In this paper, I will review three kinds of clarification which the application of this distinction can offer and the need this implies for a more powerful hermeneutic toolbox than has been applied to cybercrime to date.

First, one of the most common taken-for-granted assumptions underlying standard conceptions of cybercrime—that it is a novel crime because it is a technical crime—becomes far less self-evident when the underlying association with syntactic criminality is made more explicit. Given that syntax is effectively a synonym for “digital code,” it becomes obvious why actions like malware distribution and DDoS attacks are not just perceived as novel crimes, but also as the most typically “cyber” of cybercrimes. The distinctive character of cybercrime is based on the fact that it can be generated by code and driven by the algorithms which depend upon code (Lessig, 1999)—which is really to say, syntax. This also explains why a further popular prejudice has developed: that it is only by way of other species of code/syntactic tools that such criminalities can be tackled.

A second benefit of reflecting upon the syntax-semantic distinction lies in what it tells us about the cultures of alarm and fear which have developed around cybercrime (Wall, 2008). The idea that there is some fundamental discontinuity between semantics and syntax has been widely debated within computational philosophy (Searle, 1999; Stich, 1983), so it is not surprising that discussions of cybercrime have also assumed that syntactically driven computational states somehow stand “outside” the rich, complex world of the human-semantic. And given this, the further assumption that the behavior of algorithms represents something alien or other follows naturally enough. From there, it is a short step to the tacit belief that syntactic machines are something to be feared as much as they are to be admired.

Finally, and crucially to what follows, by unpacking the syntax-semantic distinction, we can begin to make sense of one of the most troubling aspects of current thinking around cybercrime—why proper critical discussion of the phenomenon has been so limited. A key feature of any formally correct syntactic language system is the property of “completeness,” the requirement that all truths within the language can be proven by correct application of the rules and symbols of the language (cf. Hackstaff, 1966). This implies that nothing external to these rules is required to secure truth. It is striking (but telling) how often discussions of cybercrime have tended to mirror this line of thinking, for if knowledge of syntax and its algorithmic outputs suffices to determine the truth about such offending, what need is there for alternative perspectives? Seen in this light, a syntactic view of cybercriminality appears disturbingly close to an article of religious faith, because it encourages us to view cybercrime as a phenomenon which we may be able to describe in various ways, but interpret in only one way.
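
The completeness property invoked above has a standard formal statement (a textbook gloss, stated here in notation the article itself does not use):

    for every formula φ:   Γ ⊨ φ   implies   Γ ⊢ φ

where ⊨ denotes semantic entailment (truth in every model of the premises Γ) and ⊢ denotes syntactic derivability from Γ by the rules of the system alone. Nothing outside the rule system is needed to reach any truth the system can express; this is precisely the self-sufficiency that syntactic accounts of cybercrime appear to inherit.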

This kind of doctrinaire faith in the all-encompassing explanatory power of syntactic/algorithmic interpretations of cybercrime has been enormously damaging, not just to effective critical analysis of the phenomenon, but also to the kinds of responses to it which have (so far) been developed. In what follows, I will begin by setting out a provisional genealogy of how our perceptions of cybercrime have developed. By winding the clock back to some of the circumstances surrounding the origins of this variety of offending, some corrections to the syntactic interpretation will be outlined. This will set the scene for a more critical interrogation of the way certain foundational assumptions, in particular those involving the syntax-semantics distinction, continue to impede effective understanding. As a result, it will also suggest an outline of some richer methodological approaches to the cybercrime problem.

1991: Genealogy, origins & influence—A thematic visualization

One immediate, but surprisingly underused, framework for developing a richer, more semantically focused approach to cybercrime is by way of its genealogy. This might involve the socio-technic trajectories of information technology crime over time and how, in turn, our perceptions of this evolution have been shaped by various cultural and ethnographic influences. There are many ways in which a genealogical method for cybercrime could be delineated (Bowman, 2007; Anaïs, 2013), but I will use thematic visualization as one such approach (Tufte, 2006). Thematic visualization involves visual representations of overtly qualitative data, complementing the increasing utility of (quantitative) data visualization methods (Banks, 2001). I will use it here to highlight the convergence of a series of (ostensibly) unrelated events in 1991—a year which arguably represents a crucial juncture in the development of cybercrime. Specifically, given that it was in this year that the World Wide Web first became active, 1991 could reasonably be characterized as the “ur-year” for cybercrime—the year when it properly ‘began’.2 By interweaving key events which impacted the origins of our thinking about cybercrime with the technical origins of cybercrime itself, a thematic visualization of 1991 can situate these early technical developments within a wider field of cultural influences. In turn, the changing perceptions of our newly connected world, and the various pros and cons we have attributed to it, can be revealed in more granular detail. An example of this kind of visualization is seen in Figure 1 (below).


Figure 1. 1991 and the origins of cybercrime: A thematic visualization
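
As a rough indication of how a figure of this kind might be assembled, the following is a minimal plotting sketch rather than the article's own figure: the month placements are approximate placeholders, and only the six theme labels E1 to E6 are drawn from Table 1 below.

```python
# A minimal sketch of a qualitative timeline for 1991. Month placements are
# approximate placeholders; only the theme labels E1-E6 come from Table 1.
import matplotlib.pyplot as plt

events = [
    (1, "E4 First Gulf War coverage"),
    (1, "E6 'Safe Computing' report (CSTB)"),
    (3, "E3 Rodney King footage"),
    (3, "E1 The Lawnmower Man filmed"),
    (7, "E5 First GSM (2G) call"),
    (7, "E2 SFNet coffee-house network"),
]

fig, ax = plt.subplots(figsize=(9, 3))
ax.axhline(0, color="grey", linewidth=1)                 # the temporal axis
for i, (month, label) in enumerate(events):
    height = 0.3 + 0.25 * (i % 3)                        # stagger labels to avoid overlap
    ax.plot([month, month], [0, height], color="grey", linestyle=":")
    ax.plot(month, 0, "o")
    ax.text(month, height, label, ha="center", va="bottom", fontsize=8)

ax.set_xlim(0, 13)
ax.set_ylim(-0.3, 1.2)
ax.set_xticks(range(1, 13))
ax.set_yticks([])
ax.set_xlabel("1991 (month)")
ax.set_title("Thematic visualization: 1991 and the origins of cybercrime")
plt.tight_layout()
plt.show()
```

The value of such a layout lies simply in placing qualitatively coded events on a common temporal axis so that their convergence can be read at a glance.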


It is important to stress that the events detailed in this particular visualization are by no means exhaustive, as other kinds of indicator events are possible. What they do suggest, however, is that there are many more ways of interpreting the subsequent development of cybercriminality than simply pointing to changes in information technology or to the oft-repeated construct of an “arms race” between cybercriminals and law enforcement agencies.

More details of each of these themes and some of their implications for how we think about cybercrime are examined in Table 1. It is striking how, by utilizing just six thematic indicators, we can begin to look beyond the more familiar “technical-syntactic” events (shaded in blue) behind cybercrime. Instead, a more complex, more semantically oriented toolbox begins to emerge, one which allows us to excavate a far wider range of influences (shaded in grey) which have informed our perceptions of this variety of crime.


Table 1. Developing the thematic visualization
(Columns: Theme | 1991 event | Thematic | Cyber implications)

Theme E1
1991 event: The filming of The Lawnmower Man
Thematic: Featuring a computer-animated journey into virtual space, the film was seminal in the gradual reinforcing of associations between the newly forged web and the fantasy of a non-physical alternative reality. This was one of several cultural productions (together with “cyberpunk” books like Neal Stephenson’s Snow Crash3) which appeared to indicate how William Gibson’s earlier idea of a “cyberspace” had become a material fact.
Cyber implications: Enhancing the sense that a new frontier of boundless possibility had opened up where anything (legitimate or illegitimate) could now occur.

Theme E2
1991 event: Creation of the SFNet Coffee House Network in San Francisco
Thematic: One of the key precursors of the “cyber-cafe” phenomenon (Bishop, 1992). The SFNet (which reified earlier online communities like the WELL) was followed by several more developed examples, such as Cyberia in London, which provided an early sense of the impacts of the web upon social interaction.
Cyber implications: Highlighting the possibility that digital social connectivity might not only sponsor new varieties of digital community, but new ways of judging conduct and assigning blame. Online hate and trolling are among the results.

Theme E3
1991 event: ‘Rodney King’ incident
Thematic: Images of the beating of Rodney King by LA police, recorded (in those pre-mobile-cam days) on videotape by a passer-by, were rapidly disseminated across the world. A key moment in the development of instantaneous witness and the “all-at-onceness” of contemporary life.
Cyber implications: Anticipating the significant power of digital media to generate viral news stories and to create the sense that criminal accountability might become far more universal while also becoming more subject to ‘spin’ and distortion.

Theme E4
1991 event: First Gulf War
Thematic: As semi-automated, remotely controlled cruise missiles rained down upon Iraqi cities, viewers were able—for the first time—to tune in live to war via the new 24-hour news stations like CNN. With this, the idea developed of a sanitized, “safe” war driven by the power of syntax/code to ensure maximum force with minimum casualties.
Cyber implications: Indicating how the imagery of war would become blurred with the imagery of computer gaming and recycled as mass entertainment. The result would be a virtualization of destruction, where the borders between slaughter and spectacle were no longer clear.

Theme E5
1991 event: World's first GSM (2G digital mobile phone) call
Thematic: In a curious coincidence with the origins of the web, this year saw the first successful demonstration of GSM (Global System for Mobile Communications), the (2G) protocols which set the first accepted standard for mobile communications networks. GSM now has over 90% market share.
Cyber implications: Presaging a utopian future of seamless digital connectivity.

Theme E6
1991 event: Publication of “Safe Computing in the Information Age” (CSTB, 1991)
Thematic: One of the first significantly alarmist computer crime assessments, this report anticipated many of the key assumptions around how computers would come to dominate the crime landscape, from new varieties of digital theft through to the advent of cyber-terrorism.
Cyber implications: Creating the foundations of a new certainty that the advent of connected computers meant the advent of wholly new kinds of crime waves.


By applying this (still relatively limited) toolbox, it becomes possible to make far more complex inferences about the development of cybercrime as opposed to recording exploits of gaps in Java or observing new variants in malware types. Consequently, by further combining these thematic indicators, wider inferences become possible. For example:

  • E2 + E3 suggest how the link between the spatio-temporally extended communities fostered by cybercafés, online fora, and bulletin boards, and the globally circulated footage of the Rodney King beating, contributed to the genesis of the “synopticon” (Mathiesen, 1997)—and with that, the phenomenon of mass witness and instant accountability on the part of globalized audiences. The failure of this prototypical “citizen journalism” (Allan & Thorsen, 2009) to bring justice (all the officers filmed beating King were acquitted) predicted a further, darker side to the new digital society—a world where social media becomes so blurred with fake news that police officers captured on film in the act of shooting African American citizens are able to escape any criminal consequences.

  • E1 + E6 suggest how the eerily prescient claims about the potential of online fraud and terrorism contained in the Computer Science and Telecommunications Board (CSTB) report were quickly linked to the idea of a cyberspace. Not only does this evidence how far back many of the now familiar assumptions about the riskiness of the internet can be traced, but it also suggests how dissonances between utopian/dystopian perceptions of the virtual have contributed to the idea of a boundless, endlessly rising, or continually changing crime-type.

Cons—Two foundational myths

Thematic visualization offers only one among many more “polymorphic” approaches to decoding the genealogy of discourses around cybercrime. Another potential approach involves textual analysis of how the terminology used to define perceived risks of digital connectivity evolved. For example, data-analysis tools such as Google’s Ngram Viewer, which enables users to search for the frequency of terms within over 24 million published sources (Ophir, 2016), can help provide fascinating insights into our changing perceptions of the cyber-world. Even a cursory examination of the period from 1991-2000 highlights how, as the frequency of terms like “cybercrime” or “cyber threats” increases, the use of more positive terms like “cyberrights” gradually declines.4
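
As an indication of how such a frequency comparison might be reproduced, the following is a minimal sketch under stated assumptions: it presumes the term frequencies have already been exported to a local file, here called ngram_export.csv, with columns year, term, and frequency (the file name and column layout are illustrative conventions, not a format supplied by the Ngram Viewer itself).

```python
# A minimal sketch: plot the relative frequency of selected "cyber" terms, 1991-2000,
# from a local export with columns year, term, frequency (an assumed layout).
import pandas as pd
import matplotlib.pyplot as plt

freq = pd.read_csv("ngram_export.csv")                    # e.g. 1995,cybercrime,0.0000012
subset = freq[(freq["year"] >= 1991) & (freq["year"] <= 2000)]

for term, group in subset.groupby("term"):
    plt.plot(group["year"], group["frequency"], label=term)

plt.xlabel("Year")
plt.ylabel("Relative frequency in the corpus")
plt.title("'Cybercrime'/'cyber threats' vs 'cyberrights', 1991-2000")
plt.legend()
plt.show()
```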

What is striking when triangulating insights about cybercriminality on the basis of techniques like thematic visualization, textual analysis and so on, is just how profoundly conflicted our perceptions of digital connectivity were from the very outset. Two broad trends in our thinking about the “cyber” world and its benefits and harms soon become apparent; trends which now approach the status of foundational myths about online interaction and deviance. Since these myths remain central influences upon the contemporary understanding of digital technology and its criminal potentials, it is worth spelling them out in a little more detail. The first myth—which we can call “CON 1”—usually went something like this:

  • CON 1: Internet connectivity and online interaction offer one of the most significant social shifts in human history. Not only do they provide opportunities for radical improvements to life, enhancements to rights, and unbounded freedoms, but also access to a wholly new kind of reality.

In hindsight, it is easy to see how unrealistic CON 1 was and how it reflected an overly optimistic faith in the magic of virtual space and its capacity to stand outside traditional structures of governance. In other words, CON 1 offers a clear manifestation of what has been called “digital utopianism” (cf. Turner, 2006; Dickel & Schrape, 2017), the excessively idealistic perception of new digital technologies common within sources of this early period (cf. Barlow, 1996; Levy, 1984; Rheingold, 1993).5 As ever, however, our often ambiguous perceptions of technology meant that such optimism was quickly tempered (Winner, 1997). Instead, the kinds of latent suspicions seen in E6 above fostered a growing belief that digital technology was more likely to harm society—especially by way of the new criminal opportunities it seemingly offered. Early texts such as Parker’s (1976) Crime by Computer had proposed the conceptual possibility of computer crime, but the limited connectivity available at this time meant that by the onset of the 1980s, fewer than a thousand computer crimes had been recorded, and many of these involved nothing more than the theft of a computer. Thus, the stark warnings about the criminal risks of information technology drawn out in the earlier thematic visualization were only a kind of prologue to a far more negative mindset which began to coalesce. Very quickly, the image of the hacker was recalibrated from romanticized technical genius to malevolent criminal mastermind (Steinmetz, 2016; Sterling, 1992); the media became increasingly obsessed with the internet as a site for sexual risk and shadowy predators; and an ever more insistent framing of online criminal activity in terms of striking, staggering, or exponential rises began to typify coverage of cybercrime within the key sources of the time (McGuire, 2008). And, as the realities of all-pervasive digital surveillance began to undermine the idea of cyberspace as a liberated (and liberating) space, a far more skeptical body of literature around internet life and culture began to develop (see amongst many examples, Carr, 2010; Margolis & Resnik, 2000; Morozov, 2011; Resnik, 1998). The scene was set for a catastrophization of online interaction, and with this, a second foundational myth about online activities—one now almost entirely the inverse of the perceptions reflected in CON 1.

  • CON 2: Internet connectivity and online interaction constitute one of the greatest dangers ever posed to society, threatening a world of increasing risk, criminality, and/or social control.

Thinking in terms of CON 2 did not just come to dominate the public imagination about cybercrime; it soon acquired a particular cachet within governance, criminal justice, and media circles. Cyberspace was transformed from a space of exhilarating possibility into an unregulated, anarchic space—with the image of a digital “Wild West” now serving as one of the recurring metaphors used to characterize it (Morris, 1998; Yen, 2002).

This catastrophic reinvention of cyberspace, which more varied methodological approaches can help tease out more clearly, remains integral to contemporary interpretations of cybercrime. In particular, it does much to explain why cybercrime has now become a kind of catch-all explanation for almost every kind of criminal wrong. For example, even one of the best evidenced and most striking of longitudinal criminological trends—the ongoing fall in crime rates referred to as the “crime drop” (Farrell, 2013; Matthews, 2016; Tcherni et al., 2016)—has now been brought into question when seen through the cybercrime lens. That is, rather than accepting more economical explanations for the crime drop—that it is a product of superior crime control, changes in the economic background, or simply part of a longer-term cyclical shift—there is now a suggestion that this was really a kind of fiction all along, a criminological equivalent of fake news. Rather, crime rates have been steadily rising all the time because of the explosive (though—of course—largely undetected) rises in cyber offenses (see Fitzgerald, 2014). Sweeping methodological changes imposed upon well-established crime metrics such as the Crime Survey for England and Wales in order to detect and represent this hidden crime, or to demonstrate how falling crime rates are offset by rises in cybercrime, are among the many results of this shift in perceptions.

The fact that both CON 1 and CON 2 manifest such simplistic “binaristic” views of digital technology (i.e., internet=“good” or internet=“bad”) may be more than mere coincidence. Specifically, the influence of syntactic views of cybercrime suggests that such interpretations inherit a kind of machinic perspective—one where the opposing realities form a parallel emotional syntax—a “Boolean logic” of despair and fear. Within such an alphabet, hyperreal polarities like utopian/dystopian or liberating/enslaving act together with catastrophic binaries such as bad/disastrous, serious/very serious, or out of control/beyond any control, to determine the very foundations of our thinking about cybercrime. In turn, the contradictory logics behind CON 1 and CON 2, and the growth in cyber-hysteria in the early to mid-phase of cybercrime development, are evidence for the contribution of syntactic views to the “cultures of fear” associated with cybercrime (Wall, 2008). Syntax is central to such fears, for it merges neatly with a familiar cultural nightmare—the dread of man-made monsters, creatures we create but which eventually act autonomously—i.e., outside of human jurisdiction. Such fears are deeply rooted within all human cultures and can be evidenced in various folk-nightmares such as the Golem, Frankenstein (Curran, 2010), or, more recently, Skynet (King, 2017). In the cyber context, it is precisely because our digital machines are driven by the incomprehensible language of syntax that we see their outcomes as beyond our control. For though they are (ostensibly) mediated by human agents, machine behaviors, generated as they are by the cold logic of syntax, are not just “otherly” but alien.

CONstructions—The novelty of technical crime?

No matter how pervasive the culture of fear produced by the syntactic engines which drive cybercrime, such feelings could not have been sustained for long without more concrete and credible rationales. Key to such rationales has been a second line of thought which the thematic analysis suggested, and which is clearly discernible within CON1. This is the sense that “cyber” presents us with a form of criminal action and agency that is wholly unlike anything previously witnessed. Criminologists have often failed to point out the basic implausibility of this conclusion. The number of ways in which humans can harm other humans is ultimately rather limited—so genuinely novel harms are therefore rare. It is also clear that technological advances have regularly been associated with new kinds of crime or harm—whether these involve the increase in casualties following the introduction of gunpowder, the surge in intellectual property crime which arose with new printing technologies, worries about new risks posed by railway, automobile and other transport technologies, or the concerns about gambling and prostitution which arose with the development of the telephone (for these and other examples see McGuire, 2016b).

Why then has it been assumed that the information technology revolution has not followed the criminogenic template seen with these previous technological shifts and has instead spawned wholly new kinds of crime altogether? Here we see a second reason why drilling down into the distinction between syntactic and semantic views of cybercrime is valuable. That is, this perception of novelty is very much founded upon the use of syntactic devices like viruses and malware, so that the uniqueness of cybercrime is not secured by its technical basis per se, but by syntax. Moreover, since code can play a causal role in such crime—often the primary causal role—computational crime does not appear to depend upon human agency to quite the same degree as traditional crime. Indeed, so different is this (technical) crime that the syntax which drives it seems as indifferent to its own well-being as it is to that of others. In particular, as we know, not only can there be crimes of the machine, there can be crimes against the machine (Wall, 2005)—as, for example, where a DDoS attack brings down a system or a network. The deference to syntax as the defining characteristic of cybercrime can be seen in the various attempts to characterize it. For example, malware and other code-based forms of offending have sometimes been thought of as “pure” cybercrimes (Wall, 2004), just as the distinction often made between computer-dependent and computer-enabled crime (McGuire & Dowling, 2012) is really, fundamentally, a distinction between crime driven by syntax and more traditional crime types, which may have been syntactically augmented but which do not require it for their commission.

The influence of syntax in persuading us that cybercrime is best perceived as a novel (because technical) kind of offense has also been central in persuading us that the problem of regulating and responding to cybercrime represents something equally new. For when constructed as a problem of technical management, cybercrime has appeared to confront criminal justice agencies with major, if not insurmountable, challenges. Claims that police are too poorly equipped, undertrained, or lacking in technical skills for dealing with this range of offenses (HMIC, 2014; Leyden, 2001; Wall & Williams, 2013) are familiar complaints, and similar concerns have been raised about how fit for purpose our legal systems are to cope with the transformation of crime into syntax. It has been suggested that legal practitioners cannot understand how to conduct cases requiring digital evidence (see Brenner, 2012; Graff, 2016), but more serious consequences which may threaten the very foundations of legal process have also been posited. Complications around digital evidence (e.g., difficulties of retrieval or suspicions of manipulation); an increasing dependence upon expert witnesses rather than legal professionals; or the recurring problem of transjurisdictionality, where the reach of domestic jurisdictions is limited by the capacity of cybercriminals to commit offenses abroad, are all among the problems regularly identified here (McGuire, 2017).

But how sustainable is the idea that cybercrime poses such potentially destructive challenges to policing and the law? Do such difficulties really represent the kind of tipping point for policing and for criminal justice that a syntactic view of cybercrime suggests? The fact is that the history of policing has always been one marked by continual technological change and adaptation, from police whistles to the patrol car, and as such, policing is already a technical social institution (Bain, 2017). There has also been a string of successful policing operations in dealing with cybercrimes, such as the recent closure of the Silk Road dark web drugs market (Zetter, 2013), the apprehension of a Russian broker behind the BTC-e bitcoin money laundering scam (Gibbs, 2017), or various shutdowns of major botnets such as Ramnit (Fox-Brewster, 2015). All of which suggests that law enforcement agencies are far from powerless when confronted with syntax-driven criminality. It is equally clear that qualms about the “fit-for-purposeness” of the legal response to cybercrime may also be premature. There is no in-principle difference between how the law prosecutes a cybercriminal and more traditional offenders, given that in both cases, potential culprits must be identified and appropriate evidence gathered, which is then presented to neutral arbiters. There is also a long history of ways in which different technologies have been policed and legally managed. From the printing press to the motor car and beyond, new legal structures have invariably evolved to deal with new technologies. And such adaptations have usually occurred without the sense of crisis which now appears to confound attempts to enforce cybercrime legislation (McGuire, 2016b).

The construction of cybercrime as a wholly novel (because syntactic) kind of offense is thus open to a number of critical challenges. So too is the assumption that cybercrime is best defined in terms of its technical-syntactic nature. Take, for example, the idea that computer-dependent crime is a valid way of distinguishing the “real” cybercrimes from those which are merely “computer-enabled” or “assisted.” On closer inspection, this distinction is not so easy to sustain. For example, though it is true that computer-enabled crimes (like fraud or theft) appear to be independent of syntax, in that such offenses can also be enacted without computational support, it is also true that they are significant precisely because of the way that computers increase their scale, range, and force—properties which are of course wholly syntax dependent. Questions about the salience of such distinctions emerge with particular resonance in the legal context. Though laws like the U.K. Computer Misuse Act or the U.S. Computer Fraud and Abuse Act have created offenses around computer misuse, there is no significant difference in the legal principles driving prosecution of computer-dependent or computer-enabled offenses. That is, whether a prosecution involves defining examples of technical/syntactic criminality like malware creation or simpler offenses like the dissemination of a phishing email scam, convictions can only be obtained on the basis of a mens rea—an intention to do wrong. If, then, the law must treat cyber-dependent crime like any other kind of crime—as a fusion between the actual event (the actus reus) and a perpetrator’s intention to do wrong (the mens rea)—where does this leave any substantive idea of criminal novelty?

Definitional problems of this kind echo those found in the broader literature around the syntax-semantics distinction. Here we see a range of difficulties in trying to demonstrate that syntax is definitively distinct from any semantics, or that a semantics “comes out of a syntax”. Searle’s classic “Chinese room” argument can be considered as one example of the problems here (Searle, 1999). This thought experiment asks us to imagine someone in a sealed room who is shown cards inscribed with Chinese symbols. While they do not know the meaning of the symbols, they do know the rules (i.e., the syntax) which govern their use—that is, what kinds of symbol can legitimately follow other symbols. Thus, their responses to questions or communications should, in principle, be indistinguishable from those of a native speaker. However, this cannot be taken to mean that they understand Chinese, only that they can follow rules correctly. Searle’s thought experiment was specifically designed to demonstrate that semantic facts like intelligence cannot be solely determined by a syntax. Specifically, even where every syntactic rule is being followed consistently and correctly, this is not a sufficient condition for meaning to emerge. Thus, any assumption that “real” or “pure” cybercrime is a purely technical crime will be hard to sustain if, as it seems it must, this definition relies on a viable distinction of the syntactic from the semantic. In the following section, some further consequences of such an assumption will be expanded upon, and the resulting need to revisit the idea of cybercrime as a technological rather than a technical offense will be explored.

misCON-ceptions 

The tenuous reasoning behind any conclusion that cybercrime is technical because it is syntactic also needs to be examined in relation to a further, perhaps still more fundamental misconception that has shaped our views of cybercrime since the early 1990s—the failure to read it in terms of technology rather than the merely technical. The consequences of this failure have been serious. First, it has driven us towards technical solutions rather than responses to technology in its richer sense. Second, it has obscured the kinds of epistemic methods that might be effective in delivering actionable knowledge about cybercrime as a species of technology crime. In particular, it has diminished proper appreciation of key social-semantic aspects of cybercrime, such as the meanings or interpretations of what technological misuse involves or the varying modalities of its impact upon victims. Third, by focusing our attention so fully on the kinds of risks which syntax generates through malware or code, it has tended to obscure a more insidious range of risks posed by information technologies—not least their misuse by control agents. Finally, it has impeded effective evaluation of the comparative risks posed by other, arguably far more deadly technological forms.

Given that cybercrime is supposed to be the archetypical “technology crime,” one might expect to find copious studies of how digital technology as a technology has engendered and furthered it. Yet, instead of some of the more critical discussions of technology as a construct found in earlier cybercrime literatures (Grabosky, 2001; McGuire, 2008; Yar, 2006), the technological aspects of cybercrime now tend to be taken for granted when evaluating cyber risk. We might justifiably ask, then, what warrants the view that cybercrime represents one of the most serious of the contemporary threats posed by technology? We know, for example, that automobile technologies generate a level of annual road casualties which far exceeds any threat arising from computer misuse (cf. WHO, 2017). We know that the misuse of biological or chemical weaponry threatens a far greater catastrophe to human society than a temporary loss of internet connectivity (McGuire, 2012). And if—as every piece of credible scientific evidence suggests—the technologies contributing to climate change now threaten wholesale environmental disaster, why is this being treated as a lesser problem than data-theft by the corporate-State axis (Yeh, 2017)? Why are these and other examples not even discussed as “technology crimes?” Not only has the perceived threat from digital technology effectively drowned out the risks posed by other technologies, the perceived seriousness of this threat has created an assumed need for “special powers” to manage it. The consequence has been more akin to a society on the brink of war than one where a new technology has impacted crime rates—and done so in ways which structurally parallel previous technological shifts.

It is here that questions about the relationship between syntax and method in our understanding of cybercrime become central. That is, the assumption that cybercrime is a predominantly technical/syntactically driven issue has been a key driver behind the further assumption that knowledge about cybercrime is best gathered in largely technical—ergo numeric/syntactic—ways. Thus, the kind of cybercrime research seen as having the highest utility has been that which favors the methods best attuned to numeric, codable representations of the problem. Take, for example, the prodigious flow of charts, tables, graphs, and other devices produced by internet security companies aimed at depicting the volumes, varieties, and spreads of global malware infections. These are familiar documents to any cybercrime researcher, and while they provide some degree of insight, they offer little understanding about the causes and characteristics of cybercriminality. Specifically, no matter how graphically or emotively compelling such tables may be, all that is really recorded are certain kinds of volumes, many involving incidents which are not even definitively criminal. As we all know, such “research,” even though it is usually laden with vested interests (Yar, 2008), has been responsible for many of the alarmist headlines about cybercrime, and its emotional impacts have often deflected appropriate critical attention from the methods used. In spite of this, there has often been as much dependence upon such sources within scholarly research as there has been in the popular media, and this has decisively colored what we (think) we know. The predilection for prevalence metrics, cost metrics (Anderson et al., 2013), or measures of percentage rises or falls (invariably rises) in various categories of cybercrime has been one obvious result. And even though research using vague descriptive variables such as “experience of cybervictimization” or “understanding of cybersecurity” has provided a façade of greater sophistication, the ultimate aim usually remains centered upon the goal of producing findings amenable to display in graphs or tables. Even where there have been attempts to deepen understanding—for example, by examining the character or motivations of the (remarkably few) cybercriminals who have been apprehended—such studies have tended to rely upon fairly limited demographic or psychological indicators, such as age or willingness to take risks (Aiken et al., 2016; Bachmann, 2010).

The governing perception that our most reliable insights about cybercrime are those obtained via syntactic approaches, like quantitative survey research or numeric measures like cost, has tended to obscure three methodological problems. First, there is an epistemic gap between technical, cybersecurity-driven evaluations of cybercrime and conclusions obtained via more robust social research methods. Since there are no agreed ways of relating—say—a malware report to a survey measure of cybervictimization, there can be no robust justification for claiming that examples of the former either support or refute the latter. Put bluntly, there are simply no reliable comparative metrics which permit us to make the kinds of associations between data gathered in cybersecurity contexts and social science data relating to agency and intention in cybercriminality. Yet, such associations have often been the foundation of causal claims about cybercrime (see amongst many others, Cisco, 2016; McAfee, 2016; NCA, 2016; Symantec, 2017). Second, the relative novelty of cybercrime as a criminological phenomenon means that there is little in the way of long-term, well-documented trends against which any credible quantitative patterns that have been detected can be tested or compared. We are literally “in the dark” about meaningful longitudinal trends here, though one would never know it given the authoritative tone in which cataclysmic judgments about the direction of cybercrime are so often made. Third, even judged in terms of fairly limited criteria for quantitative research, knowledge about cybercrime has rarely met very exacting standards or been based on very advanced methodological techniques. For example, there has been little in the way of effective randomized control testing and minimal use of more sophisticated analytic techniques such as multivariate analysis, multi-level modeling, factor analysis, Bayesian estimation, simulation, and so on.
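
To give a sense of what even a small step toward such techniques might look like, here is a minimal sketch of a Bayesian prevalence estimate; the survey numbers are synthetic placeholders and the Beta-Binomial model is an assumption chosen purely for illustration.

```python
# A purely illustrative Bayesian estimate of cybervictimization prevalence.
# The survey figures below are synthetic placeholders, not real findings.
from scipy import stats

respondents = 1000           # hypothetical survey size
victims = 120                # hypothetical number reporting cybervictimization

# A uniform Beta(1, 1) prior over the prevalence rate gives a Beta posterior.
posterior = stats.beta(1 + victims, 1 + respondents - victims)

lower, upper = posterior.ppf([0.025, 0.975])
print(f"posterior mean prevalence: {posterior.mean():.3f}")
print(f"95% credible interval: [{lower:.3f}, {upper:.3f}]")
```

Even a toy estimate of this kind at least makes its uncertainty explicit, something the headline percentages criticized above rarely do.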

Cybercrime research is of course not alone in the naive assumption that what Jock Young (2011) once called “the numbers game” offers the most reliable basis for conclusions about the social world. As Young (2011) pointed out of social research in general—and the point applies equally within cybercrime research—“reality has been lost in a sea of statistical symbols and dubious analysis” (p. viii). Yet, if cybercrime poses the kinds of societal risk we are told that it does, then dependence upon such a limited epistemological palette surely poses a greater risk—that we end up missing significant threats hidden within the granular details. It is not that the kind of data which could provide this more comprehensive picture is wholly absent, and there have certainly been some attempts to view the problem at the micro-level. For example, Williams’s (2006) research was an early attempt to deploy ethnographic methods in studying online regulation; Holt and Graves (2007) used a qualitative approach to analyze the content of advance fee fraud messages; and Hutchings (2013) combined findings from an analysis of selected court documents with interview data from law enforcement officers within computer crime or fraud specialist units to develop a qualitative study of motivational factors in online fraud and hacking. Elsewhere, Whitty’s (2012) work and similar research used posts from online support groups or interview data with victims to construct a picture of online dating scams. It would be interesting to compare the relative proportions here, yet the suspicion must be that the volume of qualitative cybercrime data remains far smaller than its quantitative counterpart. And even where it is available, the depth of insight in interpreting the data has often been limited. It is one thing to collect interview data with perpetrators, victims, cybersecurity professionals, and other relevant agents, or to point to thematic commonalities within such data; it is quite another to draw out from such interviews the kinds of rich conclusions about social life that are foundational to the best kinds of qualitative research, such as Benjamin’s (1999) Arcades Project, Park and Burgess’s (1925) fieldwork in Chicago, or Goffman’s (1959) micro-analyses of the all-but-invisible rituals, norms, and behavioral expectations which striate social life. Existing qualitative work has also often tended to be preoccupied with an underlying policing or criminal justice agenda (how do findings ‘prevent cybercrime’ or lead to more arrests of cybercriminals?) rather than with developing the kind of deeper variable set required to move our understanding of cybercrime onto a properly social scientific footing.

What options, then, are there for building up more qualitatively focused cybercrime research—research which might act as a better balance to the volume of numeric/syntactic work that is available? One relatively straightforward option would be to draw upon a greater range of informants to widen understanding, or to perform more detailed studies of the behavior of cybercriminals. An additional approach might involve adding to or enhancing existing discourse analysis of online discussions in chatrooms or web forum data (see, for example, Wong et al.’s (2015) analysis of white supremacists’ online discussions). Fostering better understanding of the dynamics and strategies of cybercriminality, similar to those explored in Holt and Bossler’s (2016) work on “honeypots,” offers another option. Enhancing the range of case studies available to researchers would also permit a more full-spectrum exploration of specific instances of cybercriminality. For example, it might generate more detailed profiles of relevant protagonists—from the planning and inception stage through to the crime and the subsequent criminal justice response. Genealogical approaches such as that seen in the earlier thematic analysis might usefully contribute to case study work by setting it within an appropriate socio-historical framework (even if that history only stretches back to 1991, or to earlier “proto-histories” of digitally connected interaction). More temporally focused work is lacking—especially in relation to time-dynamic factors like the evolution of cybercrime events or co-evolutionary interactions between cybercriminal and cybersecurity actors (McGuire, 2018, in preparation). There is also ambitious work to be done in developing more detailed ethnographies around cybercrime, including those that foster greater understanding of the cultural or wider societal factors in the framing of cybercriminality. More ambitious still would be the use of phenomenological approaches to construct a more vivid portrait of the subjective life-worlds of cybercriminals, their victims, those who attempt to regulate such offenses, and the wider selection of actors who contribute to the cybercrime act. The use of phenomenological tools like the epoché or the bracketing of experience (cf. Psathas, 1973; Schutz, 1967) to discern the key constituents of such acts offers the prospect of the kind of perspectives which have barely been considered as yet.

However, major problems inevitably remain for the construction of a more effective qualitative cybercrime knowledge base. Aside from the usual questions about how objectively useful qualitative data can be (cf. Kirk & Miller, 1985), there is always a suspicion that where qualitative cybercrime research has been conducted, it has tended to be received with a degree of condescension: worthy, but little more than a corrective footnote to the “more reliable” syntactic/quantitative approaches. Such research has also tended to depend upon those types of respondents who are the most accessible—i.e., those from the control side (such as law enforcement) or the very few victims and perpetrators who are willing to talk. The danger then is that this leaves us with lopsided perspectives on the problem which—however unintentionally—simply reinforce the authority of more syntactically driven perspectives rather than counterbalancing them. At the same time, the fact that certain forms of cybercriminality (like malware creation) do have a strong syntactic element means that ways must be found to balance qualitative with quantitative understanding while avoiding one set of insights becoming submerged or sidelined by the other. At present, we are far away from any kind of useful interplay between quantitative and qualitative approaches to cybercrime. And appeals to mixed methods approaches will not fill the gap, since they tell us little about how to tease out the relevant theoretical and empirical correlations and continuities across differing dataset types.

A crucial consideration for any more developed approach to cybercrime knowledge is the need to avoid over-mechanical applications of established criminological theories as a device to suggest that a greater understanding has been attained. While it is of course useful to explore how standard criminological frameworks like routine activities, strain theory, subcultural theory, control theory, and the like can help ground our understanding of cybercriminality (see Hay et al., 2010; Holt, 2013; Yar, 2006 for discussions here), this should never stand as a substitute for more direct engagements with the problem. Instead, a more distinctive and self-standing body of cybercrime theory is required, one which can bring together traditional criminological thought and method with new frameworks more appropriate for digital technologies and the psycho-social spatial transformations these induce. In general, the failure to properly engage with the wealth of existing theory about technology has been a particularly striking omission in this regard. The kind of human-social understanding of technology which has been such a central factor within the philosophy and sociology of technology would do much to help redress the assumption that cybercrime is a technical rather than a technological problem. In particular, Heidegger’s (1977) observation that the “essence of technology is not technological” (p. 4), and the phenomenological approaches to technology it inspired, like those of Ellul (1964) or Borgmann (1984), have never been properly related to the implications of digital technology. This is despite their value in explaining why technology is as much a cultural artefact as it is a technical one. Other influential perspectives about technology arguably provide a still more tangible human-centered understanding, with the extensionalist approach pioneered by McLuhan (1964; see also Gehlen, 1965; Brey, 2016) of special note here. Extensional views that treat technology as a literal “extension” of the body help rule out the “instrumentalist” claim that technology is a socially neutral object merely waiting to be used by humans (Feenberg, 2002). And, since extension entails that technology is not just part of us—it is us—such views also help explain why technology crimes are just as human-centered as crimes involving our hands or other parts of our bodies. “Post-human” perspectives on technology go even further than this, positing the kind of fusions between the human body and technology—whether as a cyborg or as an actor-network—which make it completely impossible to separate the technical from the human (Haraway, 1991; Latour, 1987). In so doing, post-humanism also undermines any credible sense in which syntactic/semantic distinctions illuminate the coupling of criminal agency with digital technology. There are the glimmerings of a realization that such richer frameworks might be useful (see van der Wagen, 2018, this issue). For the most part, however, technology has usually been taken as a given in most discussions of cybercrime. Where there has been a direct focus upon it, this has tended to involve obsessive descriptions of the relevant “kit,” such as the kinds of operating systems in play, the variety of software protections being utilized, or the influence of emerging digital technologies like the IoT, the Cloud, and so on.

Thus, to create an effective semantics of cybercrime, one which can reclaim it as the socio-cultural process that it always has been, we will need interpretations of digital technology that transcend its operations as a syntactic engine and which bridge the ostensibly opposing polarity between syntactic and semantic considerations. If this can be done, it will help underline just how far the more challenging perspectives seen at the early stages of cybercrime research have become confined within self-justifying intellectual loops which tell us what we want to hear rather than what we need to know. In turn, it would offer vital support to the kind of cybercrime scholarship which properly engages with the socio-technical fusions which now surround us. In a post-truth age, it is perhaps appropriate that our understanding of what is so regularly characterized as “one of the most novel of all contemporary criminal threats” centers on little more than the oft-repeated tautology that “it is one of the most novel of all contemporary criminal threats,” rather than on a proper, structured comparative evaluation of its technological risk and the human factors behind it.

Conclusion

It is hard to evade the feeling of having been framed in the framing of computer crime. Computer crime’s genealogy, distorted as it has been by the two foundational myths of digital utopianism and digital catastrophe, has never been properly situated within the complex social realities which gave rise to digital crime; nor has its construction as a predominantly syntactic-technical form of crime ever been effectively challenged or related to the rich body of available thought about technology and its relations with the social world. The result has been a series of misconceptions—not just about what cybercrime is, or the methods required to develop properly evidenced knowledge around it, but, more seriously, about the kinds of risk it poses. Even those originally responsible for developing the technical structure of the web have long been aware that its structure now needs to move beyond the simple syntax which underpinned its origins and to be rethought in more semantic terms—as in the Web 3.0 idea (Antoniou & van Harmelen, 2008). What this means in practice is still a moot point, but at minimum, most agree that it must involve better integration of social factors like trust into the way we interact online. It is odd, then, that the “social-semantic factors” (cf. Breslin et al., 2010) which are the real facilitators of cybercrime remain so minimally explored within cybercrime theory itself. For the meanings of cybercrime to those who perpetrate it, those who are victims of it, and those who seek to control it remain largely untapped methodological resources at present. A new stage of cybercrime scholarship, one as attuned to this real foundation as to the technical solutions imagined to “really work,” awaits development.

References

Aiken, M., Davidson, J., & Amann, P. (2016). Youth pathways into cybercrime. Research whitepaper. Retrieved from https://www.sbs.ox.ac.uk/cybersecurity-capacity/system/files/Pathways-White-Paper.pdf.

Allen, S., & Thorsen, E. (Eds.). (2009). Citizen journalism: Global perspectives. Peter Lang.

Anaïs, S. (2013). Genealogy and critical discourse analysis in conversation. Critical Discourse Studies, 10(2), 123-135.

Anderson, R., Barton, C., Böhme, R., Clayton, R., Van Eeten, M. J., Levi, M., Moore, T., & Savage, S. (2013). Measuring the cost of cybercrime. In R. Böhme (Ed.), The economics of information security and privacy (pp. 265-300). Heidelberg: Springer.

Antoniou, G., & van Harmelen, F. (2008). A semantic web primer (2nd ed.). Cambridge, MA: MIT Press.

Bachmann, M. (2010). The risk propensity and rationality of computer hackers. International Journal of Cyber Criminology, 4(1 & 2), 643-656.

Bain, A. (2017). Law enforcement and technology: Understanding the use of technology for policing. London: Palgrave Macmillan.

Banks, M. (2001). Visual methods in social research. London: Sage.

Barlow, J. P. (1996). A declaration of the independence of cyberspace. In J. Casimir (Ed.), Postcards from the Net: An intrepid guide to the wired world (pp. 365-367). Sydney, Australia: Allen and Unwin.

Benjamin, W. (1999). The arcades project. Cambridge: Belknap Press of Harvard University Press.

Bishop, K. (1992, August 2). The electronic coffeehouse. The New York Times.

Borgmann, A. (1984). Technology and the character of contemporary life. Chicago: University of Chicago Press.

Bowman, B. (2007). Foucault’s “philosophy of the event”: Genealogical method and the deployment of the abnormal. In D. Hook (Ed.), Foucault, psychology and the analytics of power: Critical theory and practice in psychology and the human sciences. London: Palgrave Macmillan.

Brenner, S. (2012). Cybercrime and the law: Challenges, issues, and outcomes. Boston, MA: Northeastern University Press.

Breslin, J., Passant, A., & Decker, S. (2010). The social semantic web. Berlin: Springer.

Brey, P. (2016). Theorising technology and its role in crime and law enforcement. In M. McGuire & T. Holt (Eds.), The handbook of technology, crime and justice (pp. 17-34). London: Routledge.

Carr, N. (2010). The shallows: How the internet is changing the way we think, read and remember. New York: W. W. Norton and Company.

Cisco. (2017). Cisco annual security report. Retrieved from http://www.cisco.com/c/en/us/products/security/security-reports.html

CSTB. (1991). Computers at risk: safe computing in the information age. Computer Science and Telecommunications Board, National Academies Press. Retrieved from http://www.nap.edu/books/0309043883/html/index.html.

Curran, B. (2010). Man-made monsters: A field guide to golems, patchwork soldiers, homunculi, and other created creatures. Wayne, NJ: Career Press.

Deleuze, G. (1992). Postscript on the societies of control. October, 59, 3-7.

Dickel, S., & Schrape, J. (2017). The logic of digital utopianism. NanoEthics, 11(1), 47-58.

Ellul, J. (1964). The technological society. New York: Vintage Books.

Farrell, G. (2013). Five tests for a theory of the crime drop. Crime Science: An Interdisciplinary Journal, 2(5). doi:10.1186/2193-7680-2-5

Feenberg, A. (2002). Transforming technology: A critical theory revisited. New York: OUP.

Fitzgerald, M. (2014, July 20). The curious case of the fall in crime. The Economist.

Fox-Brewster, T. (2015, February 25). European cyber police try to shut down ramnit botnet that infected 3 million. Forbes.

Gehlen, A. (1965). Anthropologische Ansicht der Technik. In H. Freyer, J. C. Papalekas, & G. Weippert (Eds.), Technik im technischen Zeitalter (pp. 101-118). Düsseldorf, Germany: J. Schilling.

Gibbs, S. (2017, July 7). “Criminal mastermind” of $4bn bitcoin laundering scheme arrested. The Guardian.

Goffman, E. (1959). The presentation of self in everyday life. Harmondsworth, UK: Penguin.

Grabosky, P. (2001). Virtual criminality: Old wine in new bottles? Social and Legal Studies, 10(2), 243-249.

Graff, G. (2016, September 23). Government lawyers don’t understand the internet. The Washington Post.

Hackstaff, L. H. (1966). The consistency and completeness of formal systems. In Systems of formal logic (pp. 193-206). Dordrecht, Holland: Springer.

Haraway, D. (1991). A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century. In Simians, cyborgs and women: The reinvention of nature (pp. 149-181). New York: Routledge.

Hay, C., Meldrum, R., & Mann, K. (2010). Traditional bullying, cyber-bullying and deviance: A general strain theory approach. Journal of Contemporary Criminal Justice, 26, 130-147.

Heidegger, M. (1977). The question concerning technology. In W. Lovitt (Ed.), The question concerning technology and other essays (pp. 3-35). New York: Harper & Row.

HMIC. (2014). The strategic policing requirement: large scale cyber-incidents. Her Majesty’s Inspectorate of Constabulary Report.

Holt, T. (Ed.). (2013). Cybercrime and criminological theory: Fundamental readings on hacking, piracy, theft, and harassment. San Diego, CA: Cognella.

Holt, T., & Bossler, A. (2016). Cybercrime in progress: Theory and prevention of technology enabled offenses. London: Routledge.

Holt, T., & Graves, D. (2007). A qualitative analysis of advance fee fraud e-mail schemes. International Journal of Cyber Criminology, 1, 137-154.

Hutchings, A. (2013). Hacking and fraud: A qualitative analysis of online offending and victimisation. In K. Jaishankar (Ed.), Global criminology: Crime and victimization in a globalized era. London, UK: CRC Press.

Infosec. (2017). Technical anti-phishing measures. Retrieved from http://resources.infosecinstitute.com/category/enterprise/phishing/phishin-countermeasures/technical-anti-phishing-techniques/#gref

King, B. (Ed.). (2017). Frankenstein’s legacy: Four conversations about artificial intelligence, machine learning, and the modern world. Carnegie Mellon University: ETC Press.

Kirk, J., & Miller, M. (1985). Reliability and validity in qualitative research. London: Sage.

Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Cambridge, MA: Harvard University Press.

Leonhardt, E., & Röttger, S. (2006). Semantics in philosophy and computer science. University of Dresden technical papers. Retrieved from http://www-st.inf.tu-dresden.de/files/teaching/ws06/HS/Leonhardt-Paper-Introduction.pdf

Lessig, L. (1999). Code and other laws of cyberspace. New York: Basic Books.

Leukfeldt, R. (Ed.). (2017). Research agenda: The human factor in cybercrime and cybersecurity. The Hague: Eleven.

Levy, S. (1984). Hackers: Heroes of the computer revolution. Sebastopol, CA: O’Reilly Media.

Leyden J. (2001, June 30). European police ill-equipped to tackle cybercrime. Register. Retrieved from https://www.theregister.co.uk/2001/06/30/european_police_illequipped_to_tackle

Lycan, W. (2000). Philosophy of language: A contemporary introduction. London: Routledge.

McAfee. (2016). McAfee Labs 2016 threats predictions report. Retrieved from https://securingtomorrow.mcafee.com/mcafee-labs/mcafee-labs-2016-threats-predictions-report-forecasts-changes/

Marcuse, H. (1982). Some social implications of modern technology. In A. Arato & E. Gebhardt (Eds.), The essential Frankfurt school reader (pp. 138-162). New York: Continuum.

Matthews, R. (2016). Realist criminology, the new aetiological crisis and the crime drop. International Journal for Crime, Justice and Social Democracy, 5(3), 2‐11.

Mathiesen, T. (1997). The viewer society. Theoretical Criminology, 1(2), 215-234.

McGuire, M. (2008). Hypercrime: The new geometry of harm. London: Glasshouse.

McGuire, M. (2012). Technology, crime and justice. London: Routledge.

McGuire, M. (2016a). Cybercrime 4.0: Now what is to be done? In R. Matthews (Ed.), What is to be done about crime and punishment? Palgrave.

McGuire, M. (2016b). Technology crime and technology control: Contexts and history. In M. McGuire & T. Holt (Eds.), The handbook of technology, crime and justice (pp. 35-60). London: Routledge.

McGuire, M. (2017). Law in the balance: The challenge of cybercrime 4.0. (Forthcoming).

McGuire, M. (2018). Cybercrime as a co-evolutionary relationship: Findings from the ACCEPT project. (In preparation).

McLuhan, M. (1964). Understanding media: The extensions of man. New York: McGraw Hill.

Morriss, A. (1998). The Wild West meets cyberspace. The Freeman, Retrieved from https://fee.org/articles/the-wild-west-meets-cyberspace/

Morozov, E. (2011). The Net delusion: The dark side of internet freedom, New York: PublicAffairs.

Müller, V. C. (2014). Pancomputationalism: Theory or metaphor? In R. Hagengruber & U. Riss (Eds.), Philosophy, computing and information science: History and philosophy of technoscience 3 (pp. 213-221). London: Pickering & Chatto.

NCA. (2016). Cybercrime assessment 2016. National Crime Agency. Retrieved from http://www.nationalcrimeagency.gov.uk/publications/709-cyber-crime-assessment-2016/file

NCSC. (2016). 10 steps: Malware prevention. UK National Cyber Security Centre advice note, 8 August 2016. Retrieved from https://www.ncsc.gov.uk/guidance/10-steps-malware-prevention

Noik, R. (2011). AVG report warns about cybercrime catastrophe. TechSmart. Retrieved from http://www.techsmart.co.za/features/news/AVG_report_warns_about_cyber_crime_catastrophe.html

Ophir, S. (2016). Big data for the humanities using Google Ngrams: Discovering hidden patterns of conceptual trends. First Monday, 21(7).

Park, R. E., Burgess, E., & McKenzie, R. (1925). The city. Chicago: University of Chicago Press.

Parkes, A. (2002). Introduction to languages, machines and logic: Computable languages, abstract machines and formal logic. London, UK: Springer-Verlag.

Psathas, G. (Ed.). (1973). Phenomenological sociology: Issues and applications. New York: John Wiley & Sons.

Resnik, D. (1998). Politics on the internet: The normalization of cyberspace. In C. Toulouse & T. Luke (Eds.), The politics of cyberspace (pp. 48-68). London: Routledge.

Rheingold, H. (1993). The virtual community: Homesteading on the electronic frontier. Reading, MA: Addison-Wesley.

Savat, D., & Poster, M. (2005). Deleuze and new technology. Edinburgh, UK: Edinburgh University Press.

Schutz, A. (1967). Phenomenology and the social sciences. In M. A. Natanson & H. L. van Breda (Eds.), Collected papers I: The problem of social reality (pp. 118-139). The Hague: Martinus Nijhoff.

Searle, J. (1999). The Chinese room. In Wilson, R. & Keil, F. (Eds.), The MIT encyclopedia of the cognitive sciences. Cambridge, MA: MIT Press.

Steinmetz, K. (2016). Hacked: A radical approach to hacker culture and crime. New York: NYU Press.

Sterling, B. (1992) The hacker crackdown: Law and disorder on the electronic frontier. New York NY: Bantam Books.

Stich, S.P. (1983). From folk psychology to cognitive science. Cambridge, MA: MIT Press.

Symantec. (2017). Internet security threat report 2017. Retrieved from https://www.symantec.com/security-center/threat-report

Tcherni, M., Davies, A., Lopes, G., & Lizotte, A. (2016). The dark figure of online property crime: Is cyberspace hiding a crime wave? Justice Quarterly, 33(5), 890-911.

Tufte, E. R. (2006). Beautiful evidence, Cheshire, CT: Graphics Press.

Turner, F. (2006). How digital technology found utopian ideology: Lessons from the first hackers’ conference. In D. Silver & A. Massanari (Eds.), Critical cyberculture studies: Current terrains, future directions (pp. 257-269). New York, NY: New York University Press.

Van der Wagen, W. (2018). The cyborgian deviant: An assessment of the hacker through the lens of Actor-Network Theory. Journal of Qualitative Criminal Justice and Criminology, 6(2), 157-178.

Wall, D. S. (2004). Digital realism and the governance of spam as cybercrime. European Journal of Criminal Policy and Research, 10(4), 309-335.

Wall, D. S. (2005, revised 2010). The internet as a conduit for criminal activity. In A. Pattavina (Ed.), Information technology and the criminal justice system (pp. 78-94). Thousand Oaks, CA: Sage.

Wall, D. S. (2008). Cybercrime and the culture of fear. Information, Communication & Society, 11(6), 861-884.

Wall, D., & Williams, M. (2013). Policing cybercrime: Networked and social media technologies and the challenges for policing. Policing and Society: An International Journal of Research and Policy, 23(4), 409-412.

WHO (2017). Road Traffic Deaths (by country). World Health Organization.

Whitty, M. (2012). The psychology of the online dating romance scam. Project report. Retrieved from https://www2.le.ac.uk/departments/media/people/monica-whitty/Whitty_romance_scam_report.pdf

Williams, M. (2006). Virtually criminal. London: Routledge.

Winner, L. (1997). Technology today—utopia or dystopia? Social Research 64(3), 989-1017.

Wong, M., Frank, R., & Allsup, R. (2015). The supremacy of online white supremacists—an analysis of online discussions by white supremacists, Information & Communications Technology Law, 24(1), 41-73.

Yar, M. (2005). The novelty of cybercrime: An assessment in light of routine activity theory. European Journal of Criminology, 2, 407-427.

Yar, M. (2006). Cybercrime and society. London: Sage.

Yar, M. (2008). The computer crime control industry: The emerging market in information security. In K. Franko-Aas (Ed.), Technologies of insecurity: Surveillance and securitisation of everyday life (pp. 189-204). London: Routledge.

Yar, M. (2014). The cultural imaginary of the internet: Virtual utopias and dystopias. London: Routledge.

Yeh, J. (Ed.). (2017). Climate change liability and beyond. Taiwan: National Taiwan University Press.

Yen, A. C. (2002). Western frontier or feudal society? Metaphors and perceptions of cyberspace. Berkeley Technology Law Journal, 17, 1207-1263.

Young, J. (2011). The criminological imagination. London: Polity.

Zetter, K. (2013, November 18). How the feds took down the Silk Road drug wonderland. Wired.

Contributor

Michael McGuire ([email protected]) is a senior lecturer in criminology at the University of Surrey. His work has focused upon critical approaches to cybercrime and to the study of technology and the justice system more widely. His most recent book, The Handbook of Technology, Crime and Justice (Routledge, 2016, with Tom Holt), sets out the first holistic view of the role of differing technologies across each stage of the criminal justice process.
