
Content and Censorship

The practice of censorship dates back thousands of years and has probably existed since religious debate, political discourse and folklore first began. It can be found as early as the Old Testament – “Thou shalt not take the name of the Lord, thy God, in vain” – and in Plato’s proposed ideal society in The Republic, where officials would prohibit the telling of stories that were deemed detrimental. The term “censor” is derived from the Latin title of a Roman magistrate who took censuses and oversaw public morals.1

The 20th century abounds with examples of the censorship of literature, art and free expression. Examples include the burning of books in Nazi Germany, the blanket censorship of public opinion and publications in the Soviet Union and apartheid-era South Africa, and legislation such as the US Comstock Act that prohibited dissemination of any “…article of an immoral nature, or any drug or medicine, or any article whatever for the prevention of conception or procuring of abortion…”2 These and other examples illustrate the significance of censorship in modern as well as earlier times.

 

The growth of ICTs and the development of an information society have placed issues of content and censorship at the forefront of debate around individual rights and the restrictions and limitations associated with them. For some, unrestricted access to information and free expression are critical components of an information society, yet many would agree that regulation is needed to set boundaries on what is permissible. Governments and commercial agencies walk a fine line between these two positions: tasked with providing, and at the same time filtering, content that is at the core of this new society. As a result, policy makers and civil rights advocates today may define the reach and influence of content and censorship for the foreseeable future.

 

Defining censorship

 

There is no single definition of censorship that satisfies all criteria. Interpretations of its meaning are often entwined with emotional or ideological positions on freedom, repression and the relationship between the state and the individual.

 

In the broadest sense of the term, censorship can be described as any influence exerted to modify the original message or to prevent it from being communicated.

 

 

The online edition of the Oxford English Dictionary, more narrowly, defines censorship as a noun derived from “censor” – the title of an official who examines material that is to be published and suppresses parts considered offensive or a threat to security. This definition might be updated today to include the restriction of access to content by users within a particular network or nation state.

 

Censorship generally coexists with self-censorship, the choice by authors and content providers to exclude material which they believe may breach legal restrictions or offend social norms (whether of society as a whole or of constituent communities). The phenomenon of self-censorship is elusive and often invisible. It may be the psychological result of living in a society that accepts some forms of censorship, of historical influence, or of pressure to abide by legislation and cultural norms. The growth and prevalence of surveillance in today’s ICT infrastructure, as well as the influence of intellectual property and libel legislation, has made it more difficult to assess the effect and presence of self-censorship in modern publications and communication of content.

Narrowing the Definition

 

The process of creating, disseminating or publishing content, and of ensuring that it reaches an audience, is referred to in this chapter as the “information cycle”. Censorship is understood in this chapter to refer to the censorship of existing content and/or of the means to distribute it. The channel for communicating content is therefore the point at which censorship occurs.

Censoring the distribution channel can take place at two different points within an information cycle:

  • It can occur pre-publication, when the original message is prevented from being disseminated. This includes self-censorship, legislation and editorial or managerial interference with material to be published or made available to the public.
  • It can also occur post-publication, when an audience is prevented from accessing a message or content communicated to it. Today, this includes access to Internet websites and online services.

 

Censorship can refer both to the blocking of an entire body of content and to the blocking of parts of it. It may be, for example, that a book is allowed to be published once an offending or possibly libellous passage has been removed. At other times, an entire article or web domain may be banned or closed to access for citizens of a country. This chapter sometimes refers to the selection of content to be modified or banned as “content filtering”, and to the restriction of access to an entire subset of content as “censorship”. The distinction is necessary when defining and explaining the technology involved in censoring content on the Internet.

The denial of access to an entire communications technology or service is a problem in today’s information society, but its inclusion as “censorship” is subject to debate. Does the absence of particular transmission media such as internet access, short wave radio frequency or satellite television for an entire state or region constitute censorship or an issue of access rights? The presence of different media formats and information sources is a prime determinant in the availability of diverse content. However, there are a number of reasons why access may not be present – some of which are geographical and technological while others can be attributed to policy decisions. This chapter deals specifically with existing content and its communication channels. Issues of universal access are discussed in Chapter 17.

 

Content censorship is also intertwined with issues of copyright, access to public information, internet governance and policy, broadcasting and communication rights. This chapter does not cover in detail policy and debates described elsewhere in this handbook, but the reader should be aware that discussion around censorship is strongly influenced by these issues. Please refer to Chapter 11 for more information about broadcasting regulations, Chapter 20 for internet governance, and Chapter 33 for intellectual property rights.

 

A framework for analysing content

Content is made up of a number of distinctive elements, including the originator/author, the time of creation, the message itself and its intended audience. One way to classify content when dealing with censorship is to analyse the censor’s motivation. Understanding the elements that prompt censorship may help to identify its purpose, or to assess the accuracy and appropriateness of legislation permitting its use. If motivation is not explained or provided for in legislation, one may eliminate possible causes in order to determine the purpose.3

The originator and time of creation

 

Apart from computer-generated data, all content is created by people, either acting as part of a group or acting by themselves. Content is born in thought and reproduced in speech, through physical action, on paper, or through media such as broadcasting and the internet. The originator(s) may be identifiable or anonymous, or may use pseudonyms or maliciously misleading names.

 

In the past, the time of creation was often imprecise. Today, when originators/authors use ICTs, the time of creation is usually recorded and assigned to content automatically. Mobile phones, digital video and photography devices, personal computers and digital network devices operate with an internal clock, which is used to keep a log of activity or to assign a time stamp to created content.

 

Similarly, the majority of ICT devices have a unique identification number. This may be a product code, a part number or a MAC4 and IP address for internet-connected devices. Content created or sent from these devices is likely to be identifiable back to the source.

Governments often rely on these identifiers to assist with fighting crime, as is provided for by the European Union Data Retention Directive of 2006, which requires each member state to trace and identify the source and destination of a communication; to identify:

  • the date, time and duration of a communication;
  • the type of communication;
  • the communication device;
  • and the location of mobile communication equipment;

and to keep this data for a period of between six months and two years5.
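
By way of illustration, a record covering these categories of data might be modelled as in the Python sketch below. This is a minimal, hypothetical sketch only: the Directive specifies categories of data to be retained, not a technical schema, and the field names here are invented.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class RetainedCommunicationRecord:
        # Illustrative only: field names are hypothetical, chosen to mirror
        # the categories of data listed in Directive 2006/24/EC.
        source_id: str                        # calling number, user account or IP address of the sender
        destination_id: str                   # number, account or IP address of the recipient
        start_time: datetime                  # date and time the communication began
        duration_seconds: int                 # duration of the communication
        service_type: str                     # e.g. "telephony", "email", "internet access"
        equipment_id: str                     # e.g. handset IMEI or device MAC address
        cell_location: Optional[str] = None   # cell ID for mobile equipment, where applicable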

 

The OpenNet Initiative studies trends in, and the technology of, Internet censorship and content filtering around the world. It has developed a list of the types of agency and information source whose identity is often the motivation for censorship, as follows:

 

academic, blogs, chat and discussion boards, government, government media, international governmental organisations, independent media, individuals, international NGOs, labour groups, locally focused NGOs, militant groups, political parties, private businesses, religious groups, regional NGOs6

The message

The primary reason for censorship can be either the type or category of content involved or the purpose of the message contained within the content communicated.

The characteristics of a message that may be prohibited by censors include:

  • Socio-cultural content or opinion. Examples are issues relating to the rights of women, minorities, freedom of expression, media; environmental and economic development issues, “hate speech” and interpretation of arts, history and literature, etc.
  • Religious content or opinion. Examples include insulting and offensive references and descriptions of deities, religious hatred, conversion, commentary and criticism.
  • Political content or opinion. Examples include political transformation and opposition parties, reform, governance, legislation and international relations.
  • Content which is believed to pose a threat of conflict or a threat to security, including material that is identified with terrorism, military and extremism, separatism and state secrets.
  • Erotic content – i.e. text, images and audio/video material suggesting or portraying acts of sexual intercourse, provocative attire and pornography, and references thereto, the acceptability of which varies significantly between national jurisdictions.
  • Licence restrictions and limitations – message content that is protected by intellectual property rights, whose use or distribution may require appropriate permissions and licences.
  • Censorship circumvention tools or tactics – including internet and software tools, publication and broadcasting channels, as well as methods and suggestions for how to bypass existing censorship techniques and technology.

Examples of the purpose of content which can lead to censorship include:

  • To inform an audience about particular facts or opinions, events or news items.
  • To misinform or disinform an audience with false information, whether unintentionally or with malicious intent. Often it is the censor that decides the validity of the message.
  • To threaten or incite action in response to or because of a particular person, group, idea, race, sexual orientation, profession, opinion, organisation or government. This includes incitement to violence, protest, racial hatred, social action and civil disobedience.
  • To defame a person or people, organisation or idea with statements, opinion or accusations intended to harm or destroy their reputation, status and credibility.

Intended Audience

Content may also be censored according to its intended audience – for example because it is targeted towards citizens of a particular nation, region, race or language group. In the context of modern ICTs, this may include the users of a particular network service or communications tool, receivers of television transmission, radio broadcasts and satellite streaming. Content created and disseminated on the internet is usually intended for a global internet-connected audience. Considering the technical difficulties involved in preventing content from reaching an internet platform, censorship of material on the internet usually occurs after publication.

Content censorship in the international framework

 

The disparity between the interpretation of the rights of people as human beings and their rights as citizens of a particular nation is highlighted in the debate around censorship.

 

Articles 18, 19 & 20 in the Universal Declaration of Human Rights (UDHR7) declare that every person should have the right to freedom of thought, religion, opinion, expression and association, as well as to “…seek, receive and impart information and ideas through any media and regardless of frontiers.” Freedom of association has been defined as the individual right to come together with other individuals and collectively express, promote, pursue and defend common interests8.

 

Although these instruments pre-date the internet, this freedom and the related right to seek and impart information imply a right to visit any website or internet forum, the digital equivalent of a commons. The right to receive and impart information through any media has been used to argue that in the international framework all content censorship on the internet is a violation of the UDHR, though this interpretation is highly contested.

 

The International Covenant on Civil and Political Rights (ICCPR9), which stems from the UDHR and aims to create concrete legal mechanisms to assist its implementation, has been ratified by 160 countries. Points 1 and 2 of Article 19 confirm the individual’s rights to free expression and communication regardless of location and of the medium used. These rights are qualified in sections (a) and (b) of Point 3, which clarify that, where there is felt to be a need to protect the rights or reputation of others or to protect public order and national security, provisions to this effect should be established in national law.

 

Limitations elsewhere in the Covenant – the protection in Article 17 against unlawful attacks on honour and reputation, and the prohibition in Article 20 of advocacy of racial or religious hatred and of incitement to hostility and violence – can be interpreted as justification for attempting to censor communication of this type of material. The relation between different rights, and the extent to which rights may compete with one another, is highly controversial. The few areas in which limitations to freedom of expression have been widely (but not universally) agreed include denial of the Holocaust, child pornography and paedophilia-related content.

 

Although these instruments were written several decades ago, reviews by the Office of the UN High Commissioner for Human Rights and other bodies have agreed that the advent of the internet does not require the agreed principles to be updated. More information about issues of communication rights can be found in Chapter 5.

 

In practice, the majority of countries that censor information do so according to national constitutions and legislation. Thailand and Spain, for example, prohibit any act of lèse majesté (offence to the royal family), while Turkey bans all content that is deemed to insult Turkish nationhood or the founder of modern Turkey, Kemal Atatürk, as well as references to the Armenian genocide. Saudi Arabia bans the publication of internet content that “breaches public decency”, “infringes the sanctity of Islam” and “anything contrary to the state or its system”. Germany and France actively pursue and shut down (or censor access to) any publication which denies the Holocaust or makes available Nazi memorabilia.

National implementations of content censorship

 

The availability and use of modern ICTs has radically changed the nature of publication and communications. Conventional publishers and registered media companies have ceased to be the primary source of published information available to the public. The traditional one-to-many distribution model has also been displaced, to a significant degree, by the many-to-many attributes of a decentralised Internet.

 

National legislation regulating content and publication has proved to be ill-equipped for the innovations of the internet where material created in one jurisdiction can instantaneously be accessed in any country with internet access. Virtual chat rooms, fora and blogs are among the services that have enabled cross-cultural communication amongst millions of internet users, irrespective of national borders and laws.

 

Governments have responded to these changing circumstances in a number of ways, including the extension of existing laws to cover internet services, digital broadcasting, streaming and internet telephony, and the extension of surveillance and log keeping in these areas. Media regulations have also been extended in some national jurisdictions to cover internet publications. Anyone wishing to create a blog in a particular country may be required, for example, to register as a media company, enabling existing laws covering broadcasters and print media to be applied across the blogosphere10. On the face of it, this approach may ease the legal considerations for controlling online content, but a blog often represents a single person’s opinion, does not undergo an editorial process and is often published without regard to the physical location of the website’s servers (i.e. the law governing content in a country) or the sensitivities of readers.

 

Examples of legislation affecting access to digital media content include:

  • The Australian Broadcasting Services Amendment (Online Services) Bill 1999, which establishes the authority of the Australian Communications and Media Authority (ACMA) to regulate internet content. Web content hosted on Australian and foreign servers is classified by the Office of Film and Literature Classification. If content classified as prohibited is hosted in Australia, the host may be issued with a take-down notice; if it is hosted abroad, the website is added to official internet filtering software lists.11
  • The Chinese Provisions for the Administration of Internet News Information Services define online news content as “…information, reports and comments on current affairs, politics, economy, military affairs, diplomacy, public emergencies and other public affairs…” Article 5 of this provision requires any website or bulletin board wishing to publish content that has not already appeared on official websites, to be subject to the examination and approval of the Information Office of the State Council.12
  • The Vietnamese Decree on Cultural and Information Activities, which subjects those who disseminate “reactionary ideology”, reveal (party, state, military and economic) secrets, deny revolutionary achievements or fail to submit articles for review before publication to fines of up to thirty million dong (about US$1,500).13
  • The Internet Watch Foundation, an independent self-regulatory body of ISPs established in the United Kingdom in 1996, which runs a hotline for members of the public to report what they believe to be illegal child pornography. Working with law enforcement and government agencies, the Foundation sends ‘take-down notices’ to its member ISPs advising which online content should be blocked to customers.14
  • India requires within its ISP licence agreement that ISPs “…ensure that objectionable, obscene, unauthorised or any other content, messages or communications infringing copyright, intellectual property rights and international and domestic cyber laws, in any form or inconsistent with the laws of India, are not carried in his network, the ISP should take all necessary measures to prevent it…”15

Content censorship in practice

 

As previously discussed, content can be censored throughout the information cycle. The method and technology used largely depend on the channel chosen for disseminating content. A censor’s intervention takes place either prior to publication or after it. The phenomenon of self-censorship is less easily classified; the factors leading to it can be discussed in theory and illustrated with examples of past cases.

 

 

The ICT communications model

 

ICTs offer many opportunities, through the use of broadcasting and telecommunication networks, to distribute content irrespective of geographic boundaries or of any prior connection between source and receiver. The internet allows all users to publish content to the network itself. The lack of a technical and political hierarchy on today’s internet makes it possible to place and stream content on the network without preconditions.

 

Two common technical methods exist to communicate content over the internet and with modern ICTs. One involves the use of a central server to receive and disseminate content. This applies to the majority of website publications, email and instant messaging exchanges and mobile telephony. It is also the model on which censorship technology is most likely (and most technically feasible) to be implemented: the majority of content censorship systems are installed on servers and networking devices. This is described in more detail below.

 

The internet also allows the creation and use of private communication channels that do not rely on a central server. This model is known as peer-to-peer networking and it allows users to create direct connections between one another using the Internet. Currently it is commonly used for file sharing (including music downloads) and Internet telephony (e.g. Skype).

 

Another possibility is to use virtual private networks (VPN), sometimes known as intranets, to share content and communication irrespective of national boundaries. Secured by encryption, these networks can be imagined as operating on top of the existing internet or telecommunications infrastructure and are therefore not affected by standard censorship and filtering mechanisms.

Self-censorship

Precedent is often the primary driver of self-censorship, perhaps the most widespread yet least apparent form of censorship. The music and software industries, amongst others, have pursued several high-profile “example cases” to persuade others not to violate their intellectual property rights, though with variable success.16 A number of countries have jailed writers and journalists for posting opinions and calls for democracy on the internet; others have been successful in identifying and prosecuting individuals disseminating paedophile material on the web.17 These cases are widely publicised and serve as a deterrent to others.

 

The technology of content censorship on the Internet & digital networks

 

A basic technical grounding in Internet architecture and addressing is required to understand how censorship of digital content is implemented on ICT infrastructures, the possibilities for error that go hand-in-hand with this technology, and the other methods used to censor access to online content. These issues are discussed in Chapter 18, and only briefly summarised here.

The internet is a “network of networks” that consists of millions of private and public, academic, business and government networks of local to global scope. National carriers and large corporations have direct connections to the internet backbone, commonly referred to as the Tier-1 and Tier-2 networks and operated by private companies around the world. There are several methods of distributing internet access from the backbone to end-users, but the one most relevant to this chapter involves the distribution of access through Internet Service Providers (ISPs) within a given country. These ISPs form their own networks of users, which comprise the majority of the world’s internet-connected population. Users connect to their ISPs from home, public spaces, mobile telephones and office networks.

 

Generally speaking, every connection to the Internet is made via a hierarchy of nodes, hereafter referred to as gateways. The primary gateway is the software application used to transmit content from the computer to the secondary gateway. It can be an internet browser, instant messaging tool, email client or Internet telephony software. If the user is located within an office or Internet café environment, the secondary gateway will be that network’s infrastructure. Thereafter, information is relayed to the ISP and further on to the national gateway. This will become the final access point before content is transmitted to the internet backbone. The pathway may now be reversed in order to arrive at the recipient’s computer or at the destination website.

 

One of the exceptions to this description concerns national network traffic. If the user requests a website or other Internet service that is located within the same country, it is the ISP that becomes the final gateway directing the information route.

 

Another exception involves the use of a satellite internet connection. In such cases, the second or third gateway (the one that forwards content beyond the computer’s physical location) will be a satellite in orbit. Content will probably be transmitted back to earth in a different national network or directly onto the Internet backbone.

 

Filtering of content sent to the internet and censorship of website access requests may occur at any of the gateways described above, including at the application level, office network, ISP, national gateway and so on. This is described in further detail later in this chapter.

Website Censorship

 

Websites can be blocked from access using one of three common methods: IP address blocking, tampering with copies of the domain name system, and blocking of URLs. In simple terms this means that websites can be blocked according to their internet address, their name, or the system that translates their name into an internet address.

In some countries, website censorship exists primarily at the behest of the computer user – for example a parent blocking access to some categories of website on a child’s computer – or the network manager. This is implemented through the installation of content filtering software on an individual PC or network gateway.

 

The majority of countries that censor websites because of content have assigned to ISPs the responsibility for installing and running censorship software (filters). Others, however, have chosen to place filters at the gateways to the internet backbone. All traffic must pass through these national filters before it reaches the internet proper. China and Pakistan are among the countries that have implemented filtering software, with various aims and consequences, at both levels of the national internet infrastructure.18

 

Blacklists & DNS Tampering

 

Although they vary in cost and point of installation, all website censorship systems operate on a similar principle. Requests made by the user for a particular website are checked against a list of banned URLs. If a match is found, the request is denied. Similarly, blacklists may contain the IP addresses of servers, and deny requests to those addresses once the URL has been translated by the DNS.
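
The principle can be illustrated with a short sketch in Python. The list contents below are invented, and a real filter would run on an ISP gateway or proxy rather than as a standalone script, but the matching logic is the same: the requested URL, and the IP address it resolves to, are checked against banned lists.

    from urllib.parse import urlparse
    import socket

    # Hypothetical blacklist entries, for illustration only.
    BANNED_URLS = {"http://example.org/banned-page.html"}
    BANNED_IPS = {"203.0.113.42"}   # an address from the documentation range

    def is_blocked(url: str) -> bool:
        # 1. Check the requested URL against the URL blacklist.
        if url in BANNED_URLS:
            return True
        # 2. Resolve the hostname and check the resulting IP address as well,
        #    mirroring filters that act after the DNS has translated the name.
        host = urlparse(url).hostname
        try:
            ip = socket.gethostbyname(host)
        except socket.gaierror:
            return False   # unresolvable host: pass it through (a policy choice)
        return ip in BANNED_IPS

    print(is_blocked("http://example.org/banned-page.html"))   # True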

 

This is the most widespread method of censoring websites on the Internet. Another method of preventing users from visiting banned websites is to tamper with the locally stored (ISP) copy of the DNS. Requests for a specific URL may then be translated to an alternative IP address rather than the one where the real site resides. This censorship technique may be circumvented, however, by users configuring their systems to use an alternative DNS resolver, rather than the copies stored by their ISPs.
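
The circumvention described above can be sketched as follows, assuming the third-party dnspython package. The first lookup goes through the operating system’s configured (typically ISP-supplied) resolver; the second queries an explicitly chosen public resolver, bypassing any tampered local copy. A mismatch between the two answers is one indication, though not proof, that the local DNS records have been altered.

    import socket
    import dns.resolver   # third-party package "dnspython"

    def resolve_via_system(hostname: str) -> str:
        # Uses whatever resolver the operating system (usually the ISP) provides.
        return socket.gethostbyname(hostname)

    def resolve_via(hostname: str, nameserver: str) -> str:
        # Queries a nameserver of the user's own choosing instead.
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [nameserver]
        answer = resolver.resolve(hostname, "A")
        return answer[0].to_text()

    host = "example.org"
    print("System/ISP resolver:", resolve_via_system(host))
    print("Chosen public resolver:", resolve_via(host, "8.8.8.8"))   # Google Public DNS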

 

For example, during the August 2008 war between Russia and Georgia, the Georgian government ordered the country’s main ISP – Caucasus Online – to block all websites in the .ru domain. This meant that any website registered in the Russian .ru domain was not accessible for users receiving their Internet through this ISP.19

Keyword Filtering

 

A relatively new method of censorship, but one that is gaining strength and widespread implementation, is keyword filtering. This involves banning certain words or phrases, whether in a URL or in a page’s content. The approach gives censors a broader ability to filter websites and internet communication by content, and enables the blocking of a particular page within a website rather than the entire site. However, keyword filtering can be crude, and is likely to prevent access to innocuous sites as well as intended targets.

 

For example, a keyword filter might be set to ban the request for any URL containing the phrase “women’s rights”. This would leave the “www.apc.org” domain accessible, but would block the “www.apc.org/reports/womensrights.html” page. Similarly, such a filter might be programmed to deny any requested website that contains the banned phrase in its title or within any sentence of the site’s content. URL keyword filtering has been observed in the Chinese, Iranian and Yemeni Internet infrastructures.20
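
A minimal sketch of such a URL keyword filter follows; the banned phrases and the URLs tested are those used in the example above and are purely illustrative.

    # Hypothetical banned phrases; real filtering lists are far larger.
    BANNED_PHRASES = ["womensrights", "women's rights"]

    def url_is_blocked(url: str) -> bool:
        # Deny any request whose URL contains a banned phrase (case-insensitive).
        lowered = url.lower()
        return any(phrase in lowered for phrase in BANNED_PHRASES)

    print(url_is_blocked("http://www.apc.org/"))                            # False: the domain stays reachable
    print(url_is_blocked("http://www.apc.org/reports/womensrights.html"))   # True: the page is blocked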

 

Filtering content for banned keywords can also be performed at the application level. An example of this practice is the Chinese version of the Skype messaging and telephony tool, TOM-Skype. When a user types a banned keyword or phrase into the chat window, TOM-Skype prevents the message from being displayed on screen and logs the offending phrase and the user’s IP address to a separate server.21

Access Denial & Error Messages

 

When requests to access a website are denied by filtering software, a response is usually returned to the user. Some configurations present a page confirming that the website is forbidden for access from a particular network or country. In other cases, the user may be offered a form requesting the removal of a website from the filtering blacklists, as is the case in Saudi Arabia. It is also possible to receive a standard browser error message, as if the website’s URL were misspelled or such an address simply did not exist, as seen in Tunisia and Uzbekistan.
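
Researchers who test networks for filtering often classify responses along these lines. The sketch below assumes the third-party requests library and an invented list of block-page markers; real studies compile such markers per country and per ISP.

    import requests   # third-party package

    # Hypothetical phrases found on explicit block pages.
    BLOCK_PAGE_MARKERS = ["access to this site has been blocked", "forbidden by order"]

    def classify_response(url: str) -> str:
        # Roughly classify how a (possibly filtered) network answers a request.
        try:
            r = requests.get(url, timeout=10)
        except requests.exceptions.Timeout:
            return "timed out (possible throttling or silent drop)"
        except requests.exceptions.ConnectionError:
            return "connection failed or reset (possible filtering)"
        body = r.text.lower()
        if any(marker in body for marker in BLOCK_PAGE_MARKERS):
            return "explicit block page"
        if r.status_code == 404:
            return "standard 'not found' error page"
        return "served normally (HTTP %d)" % r.status_code

    print(classify_response("http://example.org/"))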

Filtering and Censorship software

 

Software used to block websites and filter keywords in Internet requests has been developed by private companies and, in-house, by government ministries. WebSense, Content Watch and Fortinet are examples of filtering software companies whose products are used in educational, corporate and national network environments. SmartFilter, a product of Secure Computing, categorises its self-researched URL lists by topic, including “Abortion, Adult Material, Education, News & Media, Illegal or Questionable”22 and so on, and allows customers to add their own URL addresses to the existing lists.

 

Countries have also been known to develop their own customised filtering software and lists of banned URLs and IP addresses. China is reported to have up to thirty thousand internet police, whose role includes searching for websites to add to these banned lists. Similarly, search results may be modified by filters. In Iran, a search made in Google for a banned keyword or phrase may return a “no articles found” result.

Geolocation and reverse filtering

 

Another trend in content filtering is based on the geographical location of the user. Websites determine the user’s location by the originating IP address of the request. Since this information is publicly available, a website can easily determine which country the user is in and modify or deny serving content to them.

YouTube has begun to implement country-based restrictions on the display of certain videos. For example, video content considered illegal in Thailand, specifically lèse majesté, carries the “TH” tag in the YouTube page code and is unavailable from Thailand.23 The Sun Microsystems Java download site abides by US export control laws and restricts downloads for users located in embargoed countries24.
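
Server-side geolocation filtering of this kind can be sketched as follows, assuming the third-party geoip2 library and a MaxMind GeoLite2 country database; the content identifier and the restriction mapping are invented for illustration. Because the decision rests entirely on the apparent source IP address, it can be defeated by proxies or VPNs that make a request appear to originate elsewhere.

    import geoip2.database                         # third-party package "geoip2"
    from geoip2.errors import AddressNotFoundError

    # Hypothetical policy: content IDs mapped to country codes where they may not be shown.
    RESTRICTED_CONTENT = {"video-12345": {"TH"}}   # e.g. a video withheld from viewers in Thailand

    reader = geoip2.database.Reader("GeoLite2-Country.mmdb")   # database file obtained separately

    def may_serve(content_id: str, client_ip: str) -> bool:
        # Decide whether to serve content based on the requester's apparent country.
        try:
            country = reader.country(client_ip).country.iso_code   # e.g. "TH", "US"
        except AddressNotFoundError:
            return True   # unknown location: serve by default (a policy choice)
        return country not in RESTRICTED_CONTENT.get(content_id, set())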

 

When accessed from China, Google will present different search results when queried on specific topics, such as “falun gong” and “Tiananmen Massacre”, than when accessed from another country. A message is displayed at the bottom of the results screen stating that “According to local laws, regulations and policies, some search results are not displayed.”25

The accuracy of content censorship & filtering models

 

Past experience has shown that under-blocking and over-blocking are inherent in content censorship and filtering systems. It is virtually impossible to find manually all the content that matches the filtering agenda, in all desired languages, and add it to the banned lists, especially as it is easy for content providers to move their locations in order to avoid banning. Likewise, it is difficult to process and include newly published websites as they appear online. As a result, the majority of countries that use filters rely on updates to lists from software vendors, adding customisations of their own.

 

URL keyword filtering will often result in websites being added to the blacklists even when their content is irrelevant to the censor’s agenda. If, for example, the word “sex” is included in the filter’s definitions, all URLs containing it will be banned. Health education websites and support groups for survivors of sexual abuse are among those that will be caught by the ban.

Banned lists containing the IP addresses of banned websites may also result in over-blocking. As mentioned previously, a web server with a single IP address may host many websites. For example, if the 217.72.179.54 IP address hosting the “www.apc.org” website were banned, this would also prevent users (within the same network or jurisdiction) from reaching 32 other websites hosted on the same server, including “www.takebackthetech.net” and “www.genderawards.net”.
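
The effect can be checked with a simple lookup: resolving a set of hostnames and grouping them by IP address shows which sites share a server and would therefore be caught together by an IP-level ban. The hostnames below are those from the example above; the address quoted in the text, and the sites hosted on it, may well have changed since this chapter was written.

    import socket
    from collections import defaultdict

    HOSTS = ["www.apc.org", "www.takebackthetech.net", "www.genderawards.net"]

    # Group the hostnames by the IP address they currently resolve to.
    by_ip = defaultdict(list)
    for host in HOSTS:
        try:
            by_ip[socket.gethostbyname(host)].append(host)
        except socket.gaierror:
            pass   # skip hostnames that do not resolve

    for ip, hosts in by_ip.items():
        if len(hosts) > 1:
            print("Blocking %s would also block: %s" % (ip, ", ".join(hosts)))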

Digital barriers to web resources

 

Governments have been known to restrict external (international) Internet traffic during national emergencies and elections. Government ministries or corporations that control internet backbone gateways have been instructed to shut them down. Only satellite internet connections, if any, remain operable in such cases.

 

Restricting the bandwidth capacity of an internet backbone gateway, whether to a specific IP address range or across the entire system, will drastically reduce the number of successful website requests able to get through to the Internet. The majority will simply time out, without the usual evidence of a website being blocked by censorware. The use of this technique was suspected during the 2006 elections in Belarus.

 

Distributed Denial of Service (DDoS) attacks have been used to overwhelm the connection to a website’s hosting server with a flood of requests. Websites hosted on servers with a bandwidth capacity lower than the amount of data being requested will become unavailable to most users. A DDoS attack is normally perpetrated by a botnet (thousands of computers controlled by a single operator, most likely as a result of infection by a virus or malware) and can be directed at a single website or even at a range of national Internet resources, as was witnessed in Estonia in 2007.26

 

 


1Online Etymology Dictionary, http://www.etymonline.com/

2“Comstock Act (1873)”, Major Acts of Congress, ed. Brian K. Landsberg, Macmillan-Thomson Gale, 2004; eNotes.com, 2006. Accessed 31 March 2009 <http://www.enotes.com/major-acts-congress/comstock-act>

3There may, of course, be instances where censorship exists for no apparent purpose and without any regulation affirming or allowing its use. It is advisable to refer to the international framework defining rights of man to freely seek and impart information, as is detailed in the following section, should this be the case.

4Media Access Control address

5Directive 2006/24/EC of the European Parliament and of the Council of 15 March 2006 <http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32006L0024:EN:HTML>

6Ronald Deibert, John Palfrey, Rafal Rohozinski, Jonathan Zittrain, eds., Access Denied: The Practice and Policy of Global Internet Filtering, (Cambridge: MIT Press) 2008

7Office of the United Nations High Commissioner for Human Rights http://www.unhchr.ch/udhr/lang/eng.htm

8Jeremy McBride, Freedom of Association, in The Essentials of Human Rights, Hodder Arnold, London, 2005, p. 18

9Office of the United Nations High Commissioner for Human Rights http://www2.ohchr.org/English/law/ccpr.htm

10Institute for War and Peace Reporting, “Internet Hit by Media Law Change”, 30 January 2007

11Electronic Frontiers Australia <http://www.efa.org.au/Issues/Censor/cens1.html>

12Law Info China <http://www.lawinfochina.com/index.asp>

13The OpenNet Initiative, Research Profiles, Vietnam <http://opennet.net/research/profiles/vietnam>

14The Internet Watch Foundation, <http://www.iwf.org.uk/public/page.31.htm>

15Government of India, Agreement for Provision of Internet Services, <http://www.dot.gov.in/isp/licence_agreement.htm>

16On January 30th 2007, Alexander Ponosov, a school principal in Eastern Russia, was sued for USD 10,000 by Russian authorities for purchasing 14 computers that were running an illegal version of Microsoft Windows for his students. The Recording Industry Association of America has filed suits against hundreds of students for sharing music online.

17Reporters Without Borders <http://www.rsf.org/rubrique.php3?id_rubrique=119>

18Ronald Deibert, John Palfrey, Rafal Rohozinski, Jonathan Zittrain, eds., Access Denied: The Practice and Policy of Global Internet Filtering, (Cambridge: MIT Press) 2008

19Civil.ge <http://www.civil.ge/eng/article.php?id=19284> and Reporters Without Borders <http://www.rsf.org/article.php3?id_article=28167>

20Ronald Deibert, John Palfrey, Rafal Rohozinski, Jonathan Zittrain, eds., Access Denied: The Practice and Policy of Global Internet Filtering, (Cambridge: MIT Press) 2008

21Nart Villeneuve, Breaching Trust – An analysis of surveillance and security practices on China’s TOM-Skype platform, (Information Warfare Monitor) 2008 <http://www.infowar-monitor.net/breachingtrust>

22Secure Computing, Secure Web SmartFilter <http://www.securecomputing.com/index.cfm?skey=86>

23The Nation, Thailand <http://nationmultimedia.com/2007/08/31/headlines/headlines_30047192.php>

24Bureau of Industry and Security, US Department of Commerce, “Regional considerations” http://www.bis.doc.gov/policiesandregulations/regionalconsiderations.htm and Sun Java licence agreement http://java.com/en/download/license.jsp

25Rough translation of the Mandarin notice displayed on Google.cn search results pages.

26The Guardian, <http://www.guardian.co.uk/commentisfree/2007/may/17/virtualwarfare>