Non-discrimination requires joint measures to prevent, reduce and prepare for disasters, and to distribute relief and promote recovery. It also guarantees the security of fundamental rights without regard to gender, sexual orientation, race, colour, language, religion, political or other opinion, ethnic group, socioeconomic circumstances, disability, age or other status. New technological tools based on innovations such as big data, predictive algorithms and AI run the risk of intensifying existing discriminatory inequalities or introducing new ones, so careful consideration of their ethical, social and legal implications is necessary.
- Secure the equality of fundamental rights for users and those being supported.
- Provide mechanisms to support the freedom to express different points of view.
- Ensure that interactions are culturally sensitive.
- Avoid any decisions that lead to inequality based on race, ethnicity, religion, gender, age, disability, or sexual orientation.
- Be attentive to any unconscious, institutional or algorithmic biases that might be inadvertently introduced into, or intensified by, your practices.
Further information
The prohibition of discrimination on grounds of sex, race, colour, language, religion, political or other opinion, national or social origin, association with a national minority, property, birth or other status is a key principle of both the EU Charter of Fundamental Rights and the European Convention on Human Rights.
In the digital age, innovations such as big data and AI have allowed decision-making algorithms to enter all aspects of daily life. However, algorithms too can be biased and discriminatory. For example, Eubanks (2018) shows how such technologies can exacerbate inequalities, bringing to light the impacts of data mining, policy algorithms and predictive risk models on poor and working-class people in America. As the Ethics Advisory Group of the EDPS (2018: 18) states, ‘novel forms of algorithmic discrimination pose a risk to equality of opportunity and to the fundamental right to be protected against digital networks that offer a wealth of often free and accessible information’.
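To make this concrete, below is a minimal sketch of how such algorithmic disparities might be surfaced in practice: it compares per-group selection rates from a decision-making model's outputs and flags large gaps. The relief-eligibility scenario, the data and the 0.8 threshold are illustrative assumptions, not drawn from the works cited here.

```python
# Minimal sketch: auditing a binary decision model's outputs for group
# disparities. The decisions, groups and threshold are hypothetical.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Fraction of positive (e.g. 'relief approved') decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate; 1.0 means parity."""
    return min(rates.values()) / max(rates.values())

# Hypothetical outputs of a relief-eligibility model (1 = approved).
decisions = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(decisions, groups)
print(rates)                          # {'A': 0.6, 'B': 0.2}
print(disparate_impact_ratio(rates))  # 0.33 -- well below the 0.8 heuristic
```

The 0.8 cut-off echoes the ‘four-fifths rule’ used as a rough screening heuristic in US employment-discrimination practice; a low ratio does not prove discrimination on its own, but it signals that a model’s outputs deserve closer scrutiny.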
In disaster risk management, the ongoing ‘informationalisation’ and ‘datafication’, along with the advent of social media and digital humanitarianism, urge us to consider the digital inequalities and technological discriminations that such new practices can bring along. These challenges range from recognising that those who are usually most in need of support might be the ones with the least access to, and understanding of, these technologies (Murthy 2011a, b) all the way to considering how the development of technological innovations in humanitarian response, such as drones and big data, might rely on existing inequalities between a tightly regulated and privacy-sensitive global North and a mostly unregulated global South (see Taylor and Broeders 2015).
Sources
Berners-Lee, T. (1998) The World Wide Web: A very short personal history. [Link]
Council of Europe (1950) European Convention on Human Rights. [Link]
EDPS (European Data Protection Supervisor) (2018) ‘Towards a digital ethics’, Ethics Advisory Group Report. [Link]
EU Race Directive (2000/43/EC)
EU Framework Directive (2000/78/EC)
Eubanks, V. (2018) Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
Murthy, D. (2011a) New media and natural disasters: Blogs and the 2004 Indian Ocean tsunami. Information, Communication and Society, 1, 1–17. [DOI]
Murthy, D. (2011b) Twitter: Microphone for the masses? Media, Culture and Society, 33(5), 779–789. [DOI] [Link]
Prieur, M. (2011) Ethical Principles on Disaster Risk Reduction and People’s Resilience. European and Mediterranean Major Hazards Agreement (EUR-OPA). [Link]
Taylor, L. and Broeders, D. (2015) In the name of Development: Power, profit and the datafication of the global South. Geoforum, 64, 229–237. [DOI] [Link]
UNISDR (2015) Sendai Framework for Disaster Risk Reduction 2015–2030. [Link]