Biometric Surveillance Failures: Where Is the Digital Do No Harm in Humanitarian Settings?

Digital do no harm has yet to be addressed in earnest in humanitarian settings; it is frequently relegated to the corners of digital security and treated as a limitation of current technology. But in the last six months, two breaches of biometric data belonging to Rohingya [1] and Afghan [2] refugees have underscored the urgent need to bring the digital data security conversation back to fundamental bioethics and humanitarian principles. At the same time, Microsoft is quite literally moving into the UN [3].


Meaningful digital protection for refugees and displaced communities will not be found in another rapid checklist or discrete technology fix. It requires deeper inquiry into the role of humanitarian-technology partnerships and into where data is positioned within humanitarian organizational culture, the de facto connector between stated guiding principles and the application of those principles in practice.


Data About Refugee Communities Versus Data for Refugee Communities


Data has slowly become a key dynamic of humanitarian organizational culture. As a means of justifying funds, and in turn organizational existence, it has been increasingly inculcated into that culture over the last thirty years. However, the lines often blur between what data is actually being collected from refugee and displaced communities and for what purpose.


Specifically, data collected to increase service utilization is often characterized as data to improve service delivery and, in turn, to prioritize communities. Yet it commonly serves donor reporting and donor accountability purposes at the same time. Given constrained resources, isn’t increased efficiency a good thing? Yes, but not if it comes at the expense of stated mandates to humanitarian communities. When data collection rationales are duplicated in this way, investment and motivation in data systems and priorities will inherently skew toward donors, given the ongoing need to perform for continued funding.


In itself, data-driven decision-making is uncontroversial and yields multiple benefits. The question is not whether data is valuable, but whose data is being collected, for what purpose, and whether its collection adheres to the humanitarian principle of doing no further harm. In operational terms, the danger for communities lies in how humanitarian principles are translated into data priorities and protections.


Indications strongly suggest that these trends are extending into digital data collection and usage. The benchmark has shifted: data systems that reinforced power inequities in the pre-digital era now reinforce power inequities and facilitate additional harm to communities in the digital era.


The New Digital Partnership Era


UN-corporate technology collaborations have been characterized as driving a new form of partnership to meet humanitarian goals. In 2016, UNHCR staff described the Connectivity for Refugees initiative in exactly those terms [4]:


“Connectivity for Refugees is forging new partnerships and seeking smart investments, with companies from Mobile Network Operators and telecommunications businesses to technology giants like Microsoft, Google, and Facebook. But the partnerships are not forged in the typical model. We can’t assume that the private sector is waiting to write checks to give us sums of money. What they want to do is engage with us. They want to help solve the problem together; they want to apply their expertise and knowledge to the problem with us.”


There are two sides to this non-typical partnership characterization. One is innovation and what could arguably be described as a more proactive form of corporate philanthropy; the other is creating commercial inroads while contributing to social good. Notably, the two are not mutually exclusive, which presents a serious concern if levels of transparency remain low.


The trajectory toward more direct involvement of corporate technology continues to escalate. During 2020, Microsoft took the unprecedented step of establishing an office at the UN, with dedicated teams in both New York and Geneva, for the stated purpose of helping to advance the UN Sustainable Development Goals [3]. With a remit spanning different UN agencies, the dedicated Microsoft teams are uniquely positioned to influence how and where digital strategies are advanced in humanitarian settings. To date, indications point to altruistic aspirations behind Microsoft’s latest investment within the UN. But given the current digital data track record among global actors, it’s worth asking: what could go wrong?


Violations of Refugee Biometric Data Are Not Theoretical Anymore


In 2013, UNHCR launched its biometric surveillance system as a significant advance in registering refugees: creating a more durable form of identification for stateless individuals, facilitating aid, and identifying individuals who are ineligible for aid or are attempting to claim assistance twice. By the end of 2018, the initiative had reportedly enrolled over 7.1 million individuals across 60 countries [5].


The establishment of and motivations behind the UNHCR biometric system are described as earnest efforts to address the crushing problem of organizing millions of refugees to enable aid [6]. However, a recent investigation by Human Rights Watch (HRW) among Rohingya refugees in Bangladesh confirms that, despite many protocols, meetings, and discussions on digital data, the implementation of digital data safeguards falls far short of meeting the needs of refugee and displaced communities [1].


Basic prerequisites for informed consent were not met: communication was unclear, refugees did not perceive an ability to opt out, and they received contradictory answers as to whether biometric data would be used for repatriation purposes. Together, these paint a less than confident picture of the safeguards currently employed in massive rollouts of biometric surveillance across refugee camps. Unsurprisingly, an adequate data protection impact assessment, which is protocol for UNHCR’s biometric activities, was reportedly not conducted in Bangladesh. Furthermore, HRW’s investigation identified a failure by UNHCR to adequately ensure that Rohingya refugee biometric data did not reach the hands of the Myanmar government [1].


There is also a question of efficiency, not for aid delivery but for communities. These systems appear to make no allowance for errors, leaving refugees who are denied aid with no way to appeal for a human override of the system, as described by an early proponent of the UNHCR biometric system who has since reversed their position [6].


A second biometric data breach occurred during the 2021 US withdrawal from Afghanistan. One of the most detailed contact network databases of Afghan police and military personnel is now thought to be in the hands of the Taliban [2]. While the Afghanistan biometric database was a failure of the US to safeguard the information, it offers a profoundly alarming reality check for such databases in armed conflict settings, underscoring the inherent vulnerability of war-affected communities, who require heightened digital safeguards.


Moving Towards Digital Do No Harm


To date, digital adaptations in humanitarian communities are accompanied by assorted institutional and initiative-level policies, but ethical frameworks and principles to guide those interventions are still lacking. Attending to the critical conceptual gaps between physical and digital rights, and the violation of those rights, is essential to ensuring that digital data protections are meaningful for refugee and displaced communities. For instance, the right to be forgotten, the right to remove oneself from digital spaces [7], takes on an intensified meaning within the protracted and post-conflict survival strategies of individual refugees and displaced persons.


While much existing attention to exploitation sits at the level of the individual person, an expanding digital agenda must also consider exploitation at the level of systems, and in particular the potentially exploitative elements of digital systems that humanitarian actors may facilitate, regardless of their intention. Notably, the "rights of corporations" to deliver services globally are not in dispute. The question is whether humanitarian entities, with explicit protection mandates, are paving the way for corporations within refugee and displaced communities.


Further, the need for humanitarian planning cannot uniformly supersede digital data protection for communities amid the complex navigation between UN entities, host governments, and repatriating governments. Data to answer questions such as which refugees will be let in, which will return, and, for the UN, who is attempting to receive aid more than once all fall into the category of organizationally relevant questions. But at what point do data collection and storage create irreparable harm at a systemic level? Does the Taliban have the capacity to retrieve and act on US-collected biometric information? In 2021, such a question should never be on the table where a hostile group or government is concerned; the assumption should always be affirmative.


Moving toward actionable steps requires strategies that function independently of donor influence, with attention and effort from both technology and non-technology perspectives. Independent humanitarian-technology panels, analogous to the Institutional Review Boards of medical research, would facilitate the transition to such an independent space.


Finally, funding realities and the outsized influence of technology companies will not shift radically anytime soon. What can shift are the systems of transparency and accountability that maintain the integrity of digital systems serving humanitarian communities. These questions are not intended to thwart technological advancement in humanitarian sectors; they are essential to shifting digital systems in these settings from amplifying risk to amplifying benefit.



[1] Human Rights Watch. 15 June 2021. UN Shared Rohingya Data Without Informed Consent.

[2] Guo & Noori. 30 Aug 2021. This is the real story of the Afghan biometric databases abandoned to the Taliban. MIT Technology Review.

[3] Microsoft. https://news.microsoft.com/on-the-issues/2020/10/05/un-affairs-lead-john-frank-unga/

[4] UNHCR. 2016. https://www.unhcr.org/innovation/connectivity-for-everyone/

[5] UNHCR. 2019. https://www.unhcr.org/blogs/data-millions-refugees-securely-hosted-primes/

[6] Loy. 2 Sept 2021. Biometric Data and the Taliban: What Are the Risks? The New Humanitarian.

[7] Fabbrini, F., & Celeste, E. (2020). The right to be forgotten in the digital age: the challenges of data protection beyond borders. German Law Journal, 21(S1), 55-65.