The Detection of Xenophobic Language and Misinformation in Media Content project was a collaborative effort between UNICC, UNESCO, IOM, and New York University SPS Capstone participants in 2024. The rise of xenophobic language and misinformation in media narratives, particularly those involving migrants, refugees, and displaced communities, has prompted the need for tools that promote balanced, fact-based journalism respecting the rights and dignity of vulnerable populations. Negligent content amplifies harmful stereotypes and false narratives, yet manually screening for such content is costly, slow, and error-prone.

In this context, UNICC collaborated with the NYU School of Professional Studies (students and faculty) to develop a comprehensive data-labeling approach that categorizes information by tone and intent. Together, the project team established the following classification criteria:

"toxic": Content containing generally harmful or offensive language intended to provoke or hurt.
"severe_toxic": Highly aggressive or extreme language with intense hostility or a derogatory tone.
"obscene": Language that includes vulgar or sexually explicit content inappropriate for public discourse.
"threat": Statements expressing an intention to cause harm or incite violence against individuals or groups.
"insult": Content that demeans or ridicules someone based on personal characteristics or affiliations.
"identity_hate": Hate speech targeting individuals or groups based on identity markers such as race, ethnicity, religion, or nationality.

The project set out to build an AI-based media analysis tool that identifies and mitigates xenophobic language, misinformation, and harmful narratives in media coverage, addressing the ethical challenges of reporting on human mobility by fostering informed and unbiased journalism. The primary goal is a robust AI tool that leverages advanced language models to detect harmful content, support ethical reporting, and help media outlets provide balanced narratives about vulnerable communities.
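The six criteria above describe a multi-label scheme: a single passage may trigger several labels at once, so each label is scored independently rather than chosen from a mutually exclusive set. The sketch below shows how such a classifier could be wired up with a transformer encoder; the checkpoint name, threshold, and helper function are illustrative assumptions, not the project's actual implementation.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# Hypothetical fine-tuned checkpoint with one output logit per label (placeholder name).
MODEL_NAME = "your-org/media-toxicity-classifier"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    problem_type="multi_label_classification",
    num_labels=len(LABELS),
)

def classify(text: str) -> dict[str, float]:
    """Score one passage against each label independently (one sigmoid per logit)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.sigmoid(logits).squeeze(0)
    return {label: float(p) for label, p in zip(LABELS, probs)}

# Flag any label whose probability clears an (assumed) 0.5 threshold.
scores = classify("Example sentence from a news article.")
flagged = [label for label, p in scores.items() if p >= 0.5]
```

Per-label sigmoid scoring, rather than a single softmax, is what allows a passage to be flagged under several criteria at once, for example as both "insult" and "identity_hate".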
Activity Type: AI Tools/Solutions; Research/Reports/Assessments

This project integrates OpenAI tools into the Displacement Tracking Matrix (DTM) core systems across three main areas: semantic search in the Central Data Dictionary, retrofitting historical data to a new DTM DataLake, and a DTM Chatbot for non-technical querying of data systems. The project also includes remote Building Damage Assessment that combines satellite imagery with machine learning models and tools.
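As a rough illustration of the semantic-search component, the sketch below embeds data-dictionary entries and ranks them by cosine similarity to a natural-language query. The embedding model, example entries, and helper functions are assumptions for illustration; they do not reflect the actual DTM Central Data Dictionary schema or implementation.

```python
from openai import OpenAI
import numpy as np

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of strings with an OpenAI embedding model (model choice is illustrative)."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

# Hypothetical data-dictionary entries (term plus definition); not the real DTM dictionary.
entries = [
    "household size: number of individuals living in a displaced household",
    "site type: category of the displacement site, e.g. camp or host community",
    "arrival date: date the displaced group arrived at the current location",
]
entry_vectors = embed(entries)

def search(query: str, top_k: int = 2) -> list[str]:
    """Return the dictionary entries most similar to the query by cosine similarity."""
    q = embed([query])[0]
    sims = entry_vectors @ q / (np.linalg.norm(entry_vectors, axis=1) * np.linalg.norm(q))
    return [entries[i] for i in np.argsort(-sims)[:top_k]]

print(search("How many people are in each household?"))
```

A chatbot for non-technical querying would typically sit on top of a similar retrieval step, passing the top-ranked entries to a language model as context.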
Activity Type: Infrastructure/Systems Development; Technical assistance; Research/Reports/Assessments; AI Tools/Solutions