
How do technology companies help bring war criminals to justice?

Foreign Affairs magazine published this report by writer and human rights defender Alexa Koenig, which examines the role that large technology companies, especially social media platforms such as Facebook, Twitter, and YouTube, should play in bringing war criminals to justice. Drawing on examples from Libya and Syria, where concrete steps have been taken to bring perpetrators of war crimes before international justice, the author proposes a dedicated digital system for documenting such crimes, which include mass killings and other atrocities.

The video clip begins harmlessly: a soldier in camouflage pants and a black T-shirt speaks to a group, mostly men, off camera, gesturing with his right hand as he talks. A revolver dangles from his left hand; behind him, another man kneels with his hands above his head. About a minute in, the man in the black shirt suddenly turns and fires, and the kneeling man collapses forward. The shooter strides toward him and shoots the prisoner twice more in the head.

More than three years later, that footage became pivotal evidence in a new case before the International Criminal Court. Prosecutors in The Hague issued an arrest warrant for the man in the black shirt, the Libyan militia commander Mahmoud Al-Werfalli, after videos documenting his role in the killing of 33 people spread across the internet. Unlike evidence presented in any previous case in the court's history, the evidence against Al-Werfalli rests mainly on material documented on social media. Without these videos, the prosecution would have had no case at all.

Libyan militia leader Mahmoud Al-Werfalli

Although the Al-Werfalli videos made headlines, they are not unique. Evidence of the world's most horrific crimes, including mass murder, torture, and the destruction of cultural heritage, is broadcast, sometimes in real time, on platforms such as Facebook, Twitter, and YouTube. Understandably, these companies often remove such graphic content from public view, but human rights activists and journalists have long argued that destroying this evidence undermines future prosecutions and deprives victims of the justice they deserve.

The result is that social media companies face a dilemma: they must keep their platforms from becoming megaphones for harmful content, yet they also need to preserve access to evidence of mass atrocities. One viable solution is what might be called "evidence lockers," repositories of potential evidence found on social media platforms, run either by third-party nonprofits or by the social media companies themselves. Similar archives already exist in other areas of law, and tech companies, together with human rights organizations, should build on this tried-and-true model to create repositories dedicated to documenting war crimes.

The idea of evidence lockers is not new. Archives already exist to preserve evidence of a range of criminal activity, including terrorism, child sexual abuse material, and antiquities smuggling. They catalog material of evidentiary, historical, or research value that social media companies, or the users who uploaded it, may have removed from public view. Existing lockers vary in who owns the content and who hosts it (social media companies or third parties), in whether companies are legally authorized to store the content, and in who has access to the archive.

When designing lockers for war crimes evidence, the first priority must be the ability to store content for extended periods, since there is often a long lag between the commission of atrocities and the start of legal investigations. Because the evidence is often highly sensitive, lockers also need clear and consistent safeguards governing who can access the stored information and under what conditions. The design of these archiving systems must also guard against reproducing old colonial patterns, in which institutions reflected the interests and preferences of Western backers at the expense of other countries.

Critics of the International Criminal Court, for example, often point to its disproportionate focus on crimes committed in Africa. To avoid such biases, future evidence lockers need to preserve content regardless of politics or geography. Ideally, legislation would protect the stored data by setting standards for which content is saved and by resolving questions of access, privacy, intellectual property, and national security.
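
To make those requirements concrete, the paragraphs above suggest what a single locker entry would need to carry: the content itself, its provenance, a long retention window, and limits on who may see it. The following is a minimal sketch in Python under those assumptions; the field names, roles, and 30-year retention figure are illustrative inventions, not a description of any existing archive.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import hashlib

@dataclass
class EvidenceRecord:
    """Hypothetical entry in a war crimes evidence locker."""
    content: bytes                  # the preserved media, stored exactly as collected
    source_platform: str            # e.g. "facebook", "youtube"
    source_url: str                 # original location before removal
    collected_at: datetime          # when the item entered the locker
    retention_years: int = 30       # long retention: investigations often lag atrocities
    access_roles: tuple = ("court", "un_mechanism", "accredited_ngo")  # assumed roles
    sha256: str = ""                # fixed at ingest so later tampering is detectable

    def __post_init__(self):
        # Hash the content once at ingest; investigators can re-verify it years later.
        self.sha256 = hashlib.sha256(self.content).hexdigest()

record = EvidenceRecord(
    content=b"<video bytes>",
    source_platform="youtube",
    source_url="https://example.org/removed-video",  # placeholder URL
    collected_at=datetime.now(timezone.utc),
)
print(record.sha256)
```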

To break with past practice, evidence lockers must give human rights groups and international organizations a say in what is preserved. Governments and companies usually decide these matters today; for example, neither the International Criminal Court nor the existing United Nations bodies documenting atrocities in Syria and Myanmar currently have clear authority to require these companies to preserve evidence.

This situation must change. War crimes are unusual in that the governments implicated in the violence often cannot or will not hold themselves accountable. By allowing humanitarian and human rights institutions to request the preservation of evidence until courts or other legal actors have an opportunity to act, social media companies can ensure that this crucial material is protected from destruction.

In the absence of congressional legislation requiring social media companies to set up evidence lockers, why should companies such as Facebook and Twitter do so? It would likely be easier for them simply to delete the content and avoid the complicated bureaucracies of law enforcement, human rights organizations, and international courts.

International norms provide one answer. Under the UN Guiding Principles on Business and Human Rights, social media companies are expected to protect human rights, avoid practices that cause or contribute to abuses, and prevent or mitigate abuses linked to their operations. Preserving content from social media platforms and sharing that data with the relevant authorities arguably falls within these guidelines. Indeed, several large tech companies, including Microsoft and Facebook, have voiced support for the principles and announced steps to bring their practices into closer alignment with them.

Companies also have material incentives to act. Legislators, advertisers, employees, and users have all pressured companies to improve their support for human and civil rights. For example, civil rights groups recently launched the #StopHateForProfit campaign to push Facebook to curb hate speech and misinformation. Google employees protested the company's involvement in Project Maven, a Pentagon initiative to improve the use of artificial intelligence for a number of military purposes, including drone strikes, and Google eventually withdrew from the project. The threat of regulation, lost advertising revenue, damaged employee morale, and reputational harm are all powerful incentives for companies to improve their record on holding war criminals accountable.

For their part, many companies have already begun engaging with human rights organizations to discuss the possibility of setting up evidence lockers for war crimes. Encouraged by human rights researchers, social media companies have floated the idea of contributing data to an independent repository or simply retaining the information in-house. At a minimum, many of these companies recognize that they have an ethical, and sometimes legal, obligation to protect the public from harmful content while ensuring that this crucial information remains available to international justice efforts.

Challenges remain, however. One is figuring out what content is worth keeping. Like "terrorism" and "hate speech," "war crime" can be an uncomfortably elastic term, which makes it difficult to detect relevant content automatically, since algorithms do not handle vague categories well. Still, activists and companies can draw on earlier efforts against other illegal practices. For example, companies can rely on definitions used by official bodies such as the Office of the United Nations High Commissioner for Human Rights, or focus initially on clear-cut abuses, such as mass killings or the use of chemical weapons, which lend themselves to detection by both human and automated reviewers.
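
As one purely illustrative way to act on that narrower focus, a platform could route only items whose automated labels match a short list of clear-cut categories to human review and preservation, while ambiguous material goes through ordinary moderation. The sketch below assumes a hypothetical upstream classifier that returns category labels with confidence scores; the category names and threshold are invented for the example.

```python
# Hypothetical triage step: flag only clear-cut categories for preservation review.
CLEAR_CUT_CATEGORIES = {"mass_killing", "chemical_weapons_use"}  # assumed label names
REVIEW_THRESHOLD = 0.8  # illustrative confidence cutoff

def triage(labels: dict[str, float]) -> str:
    """Decide what to do with an item given classifier labels mapped to confidences."""
    for category, confidence in labels.items():
        if category in CLEAR_CUT_CATEGORIES and confidence >= REVIEW_THRESHOLD:
            # High-confidence match on an unambiguous category:
            # send to human review and hold a copy for the evidence locker.
            return "preserve_and_review"
    # Vague or low-confidence signals go through normal moderation instead.
    return "standard_moderation"

print(triage({"mass_killing": 0.93, "graphic_violence": 0.51}))  # preserve_and_review
print(triage({"hate_speech": 0.65}))                             # standard_moderation
```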

Another challenge is figuring out how to make content available to war crimes investigators while preserving the privacy of the users who posted it and of the people who appear in it, which is both a legal and an ethical concern. One way to address these privacy issues is to store the data securely and then strictly define who can access it and under what conditions.
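
One simple way to express "who can access it and under what conditions" is an explicit check that requires an authorized role and a stated purpose and logs every request, whether or not it is granted. This is only a sketch with assumed role names; a real locker would add legal review and far stronger controls.

```python
from datetime import datetime, timezone

# Assumed roles permitted to view locker content; everything else is denied.
AUTHORIZED_ROLES = {"icc_investigator", "un_mechanism", "national_prosecutor"}
access_log: list[dict] = []  # every request is recorded, granted or not

def request_access(requester: str, role: str, purpose: str) -> bool:
    """Grant access only to authorized roles with a stated purpose, and log the request."""
    granted = role in AUTHORIZED_ROLES and bool(purpose.strip())
    access_log.append({
        "requester": requester,
        "role": role,
        "purpose": purpose,
        "granted": granted,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return granted

print(request_access("j.doe", "icc_investigator", "incident review"))  # True
print(request_access("anyone", "advertiser", ""))                      # False
```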

In early December, the Office of the United Nations High Commissioner for Human Rights, in partnership with the Human Rights Center at the University of California, Berkeley, School of Law, will launch the Berkeley Protocol on Digital Open Source Investigations.

The protocol, part of a global effort to set standards for the use of digital content in international criminal cases, reflects a growing recognition that digital information can strengthen accountability for human rights violations around the world. The Al-Werfalli case is just one example. On October 5, human rights groups filed a historic complaint in Germany accusing Bashar al-Assad's regime in Syria of committing war crimes, drawing on both traditional evidence and information sourced from social media. Many other cases currently before European courts rely on a similar mix of evidence.

As evidence gathered from social media becomes more widely used in war crimes cases, companies and human rights organizations must devise a robust system for preserving this irreplaceable content. Without such a system, even the most brutal mass atrocities documented moment by moment on social media risk going unpunished.

ـــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــ

This report is translated from Foreign Affairs and does not necessarily reflect the views of the Maidan website.



