Biometric Surveillance Failures: Where is the Digital Do No Harm in Humanitarian Settings?

Updated: Sep 24, 2021

The modern, digital version of do no harm has yet to be meaningfully addressed in humanitarian settings; it is frequently relegated to the margins of digital security and treated as a technical limitation of current systems. But in the last six months, two breaches of biometric data belonging to Rohingya [1] and Afghan [2] refugees underscore the urgent need to bring the digital data security conversation back to fundamental bioethics and humanitarian principles. At the same time, Microsoft is quite literally moving into the UN [3].


Meaningful digital protection for refugees and displaced communities will not be found in another rapid checklist or discrete technology fix. It requires a deeper level of inquiry into the role of humanitarian-technology partnerships and into where data is positioned within humanitarian organizational culture, the de facto connector between stated guiding principles and the application of those principles in action.


Data About Refugee Communities Versus Data For Refugee Communities

Data collection practices have become embedded in humanitarian organizational culture over the last thirty years. Unsurprisingly, they have become a valued part of the organizational 'toolkit' for two stated reasons: first, to justify funds and, in turn, aid's continued existence; second, to optimize community services. However, the lines often blur between what data is actually collected from refugee and displaced communities and for what purpose.


In particular, data collected to increase service utilization is often characterized as data to improve service delivery and, in turn, to prioritize communities. At the same time, it commonly serves donor reporting and donor accountability purposes. Given constrained resources, isn't increased efficiency a good thing? Yes, but not if it comes at the expense of stated mandates to humanitarian communities. When data collection rationales are duplicated in this way, investment in and motivation for data systems will inherently skew toward donor priorities, given the ongoing need to perform for continued funding.


In itself, data-driven decision-making is not only uncontroversial but yields multiple benefits. In operational terms, however, the danger for humanitarian communities lies in how humanitarian principles are translated into data priorities and protections for communities.

Indications strongly suggest these trends extend into digital data collection and usage. The benchmark has moved from data systems that reinforced power inequities in a pre-digital era to systems that reinforce power inequities and facilitate additional harm to communities in a digital era.

The New Digital Partnership Era


UN-corporate technology collaborations have been characterized as a new form of partnership for meeting humanitarian goals. In 2016, UNHCR staff described the Connectivity for Refugees initiative in exactly those terms [4]:


"Connectivity for Refugees is forging new partnerships and seeking smart investments, with companies from Mobile Network Operators and telecommunications businesses to technology giants like Microsoft, Google, and Facebook. But the partnerships are not forged in the typical model. We can't assume that the private sector is waiting to write checks to give us sums of money. What they want to do is engage with us. They want to help solve the problem together; they want to apply their expertise and knowledge to the problem with us."


There are two sides to this non-typical partnership: one is innovation and what could arguably be described as a more proactive form of corporate philanthropy; the other is creating commercial inroads while contributing to social good. Notably, the two are not mutually exclusive, which presents a serious concern if levels of transparency remain low.

The trajectory toward more direct involvement of corporate technology continues to escalate. In 2020, Microsoft took the unprecedented step of establishing an office at the UN, with dedicated teams in both New York and Geneva, for the stated purpose of helping to advance the UN Sustainable Development Goals [3]. With a mandate to work across different UN agencies, the dedicated Microsoft teams are uniquely positioned to influence how and where digital strategies are advanced in humanitarian settings. To date, indications point to altruistic aspirations on Microsoft's part for its latest investment within the UN. But given the current digital data track record among global actors, it is worth asking: what could go wrong?


Violations of Refugee Biometric Data Are Not Theoretical Anymore


In 2013, UNHCR launched its biometric surveillance system as a significant advancement in registering refugees: creating a more durable form of identification for stateless individuals, facilitating aid, and identifying individuals who are ineligible for aid or attempting to claim assistance twice. By the end of 2018, the initiative was reported to have enrolled over 7.1 million individuals across 60 countries [5].


The establishment of and motivations behind the UNHCR biometric system are described as earnest efforts to address the crushing problem of organizing millions of refugees to enable aid [6]. However, a recent investigation by Human Rights Watch (HRW) among Rohingya refugees in Bangladesh confirms that, despite many protocols, meetings, and discussions on digital data, the implementation of digital data safeguards falls far short of meeting the needs of refugee and displaced communities [1].


The basic prerequisites of informed consent paint a less than confident picture of the safeguards currently employed in massive rollouts of biometric surveillance across refugee camps: clear communication was lacking, refugees did not perceive an ability to opt out, and answers on whether biometric data would be used for repatriation purposes were contradictory. HRW also reports that an adequate data protection impact assessment, which is protocol for UNHCR's biometric activities, was not conducted in Bangladesh. Furthermore, HRW's investigation identified a failure by UNHCR to adequately ensure that Rohingya refugees' biometric data did not reach the hands of the Myanmar government [1].


There is also a question of efficiency, not for aid delivery but for communities. In particular, the biometric systems fail to account for errors and allow no human overrides, as described by an early proponent of the UNHCR biometric system who has since reversed their position [6]. In practice, this translates into a system in which refugees cannot appeal a denial of aid.


A second biometric data breach occurred during the 2021 US withdrawal from Afghanistan: one of the most detailed contact network databases for Afghan police and military personnel is now thought to be in the hands of the Taliban [2]. While the Afghanistan breach reflects a failure of the US to safeguard the information, it offers a profoundly alarming reality check for such databases in armed conflict settings, underscoring the inherent vulnerability of war-affected communities and their need for heightened digital safeguards.


Moving Towards Digital Do No Harm


To date, digital adaptations in humanitarian communities are accompanied by various institutional and initiative-level policies, but ethical frameworks and principles to guide those interventions are still lacking. Digital rights cannot simply be conceptualized as an extension of physical rights, given that violations can operate at a much greater scale and, in effect, invisibly. Another distinct dimension of digital rights is the right to be forgotten: the right to remove oneself from digital spaces [7]. This takes on intensified meaning within protracted and post-conflict survival strategies, where technology-based tracking for political violence or retribution can compromise the safety and security of individual refugees and displaced persons or their families.


While much of the existing attention to commercial exploitation via technology operates at the individual level, any expanded digital agenda must also consider commercial exploitation at the systems level, where humanitarian actors may facilitate those commercial systems. Notably, the "rights of corporations" to deliver services globally are not in dispute. The question is whether humanitarian entities with explicit protection mandates are paving the way for corporations to position refugee and displaced communities as profitable consumers.


Further, the needs of humanitarian planning cannot uniformly supersede digital data protection for communities. The honor system is inadequate for the politically complex navigation between UN entities, host governments, and repatriating governments. Which refugees will be resettled? Which will return? And, for the UN, who is attempting to receive aid more than once? These are all organizationally relevant questions. But at what point do data collection and storage create irreparable harm at a systemic level?


Moving toward actionable steps requires strategies that function independently of donor suggestions. It also requires moving the conversation from a technology-dominant perspective to one that incorporates humanitarian standards and ethics. One example is independent humanitarian-technology review panels, analogous to the Institutional Review Boards used in medical research.


Finally, funding realities and the outsized influence of technology companies will not shift radically anytime soon, even as new iterations of technology deployment continue. The question is not about thwarting technology advancement in humanitarian settings. The question is who benefits from it and how, and who is harmed by it and how. Establishing new technology transparency and accountability mechanisms, answerable directly to and explicitly for communities, is a clear priority, yet it remains the least pronounced aspect of many humanitarian technology strategies. We have to keep asking why.



References


[1] Human Rights Watch. 15 June 2021. "UN Shared Rohingya Data Without Informed Consent."

[2] Guo & Noori. 30 August 2021. "This is the real story of the Afghan biometric databases abandoned to the Taliban." MIT Technology Review.

[3] Microsoft. 2020. https://news.microsoft.com/on-the-issues/2020/10/05/un-affairs-lead-john-frank-unga/

[4] UNHCR. 2016. https://www.unhcr.org/innovation/connectivity-for-everyone/

[5] UNHCR. 2019. https://www.unhcr.org/blogs/data-millions-refugees-securely-hosted-primes/

[6] Loy. 2 September 2021. "Biometric Data and the Taliban: What Are the Risks?" The New Humanitarian.

[7] Fabbrini, F., & Celeste, E. 2020. "The right to be forgotten in the digital age: the challenges of data protection beyond borders." German Law Journal, 21(S1), 55-65.