To fight tech giants and law enforcement, organizers must adopt “data defense” as a core strategy—learning to collectively refuse data extraction, expose biased narratives, and reclaim personal and community information as a new frontier of political power.
Unprecedented abuses of personal data by tech companies and federal agencies to support law enforcement have raised new concerns among the general public about what happens when our personal and community data falls into the wrong hands. Companies like Palantir, which have won rigged Army and ICE contracts, have heightened people’s awareness of our government’s failures to legally protect our data from misuse. At the same time, many of our communities lack the awareness or tools to fight back against the surveillance and unlimited data extraction that fuel systems of power.
As a data justice educator, I’ve spoken to dozens of community members and organizers who feel overwhelmed by the outsized influence of technology companies in our society, and especially by the ways they enable law enforcement to monitor our lives. But institutions of power don’t only use surveillance technologies to monitor us. They also collect information about us, our bodies, and our movements through various data systems that we interact with every day.
Personal data = Any information that relates to an identifiable individual. Different pieces of information, collected together, can lead to the identification of a particular person and also constitute personal data. (Definition from the European Commission.)
Community data = A new definition I’m proposing to capture data that pertains to neighborhood assets, infrastructure, behavioral patterns, movements, natural resources, lands, or places under the collective stewardship of a community.
They also buy and sell our personal and community data through extremely profitable data markets. About 98 percent of Meta’s revenue in 2023 came from advertising that relies largely on personal data, and that same data shapes important aspects of our lives, like rent prices and insurance payouts. Fighting back against data extraction goes beyond dismantling surveillance technologies themselves and requires a political awareness of the way that systems of oppression view and manipulate us using data as their tool.
As we begin to uproot the systems that steal our data to spread unjust data narratives, we can learn from Indigenous Data Sovereignty scholars and activists, who have been the first to understand institutional data systems as colonial tools, and the first to present meaningful alternatives rooted in land-based knowledge systems. By fighting to protect our own data rights at the community level, we can also be better allies to Indigenous Data Sovereignty movements.
Understanding data rights as a framework
Learning about data rights is the first step to identifying meaningful and strategic opportunities for communities to put a wrench in the gears of the data collection machine. Organizers need new strategies, grounded in data rights, to fight for our future. From housing to food to schools to borders, data is being systematically extracted from our communities through surveys, administrative data, and educational programs. Often, data collected through these channels carries a top-down point of view, feeding into stories about us that reinforce harm against our communities and ultimately serve oppressive agendas.
For example, based on their track record, we can already assume that Palantir will use data to generate criminalizing narratives about immigrants and political actors. Their work builds on the US government’s long history of circulating data-backed criminalization narratives about Black communities to justify mass incarceration and other forms of organized abandonment.
Indigenous Data Sovereignty scholar Maggie Walter coined the term “deficit narratives” for these stories after studying colonial data practices affecting Aboriginal communities in Australia; it refers specifically to data narratives that focus on Difference, Disparity, Disadvantage, Dysfunction, and Deprivation. Unjust data narratives can come back to haunt us, sticking us with labels like “disadvantaged,” “at-risk,” or “recidivist” that carry no real meaning beyond the institutionalized metrics they represent.
So what rights do we have to our personal and community data? The short answer is that anyone represented in a data system has inherent rights as a data constituent.
Data rights include:
our right to consent to data collection;
our right to control access to our personal data;
our right to refuse third-party data sales;
our right to access all government data;
our right to participate in interpreting data where we’re represented;
and our right to opt out of, or be deleted from, data systems.
Our understanding of these rights must expand as data systems evolve, but we must start exercising them now to understand their limits. The data that systems of power use to determine our future fundamentally comes from us: our bodies, our movements, our relationships, our neighborhoods, and our lands. We have a right to defend it.
Data defense as a strategy
Data defense is a strategy based on our fundamental rights to our personal and community data. Data defense should be a tool in every organizer’s toolkit, allowing us to identify points of data extraction and organize people to resist or influence the way data about us can be used. To build a data defense strategy, organizers must become familiar with the ways systems of power wield data and identify ways to expose the flaws in the stories that these systems create with our data.
You don’t have to be a data scientist to begin questioning how data is collected and used. The first way to determine whether data systems are just or rights-preserving is to examine the data’s bias. Bias reveals a person’s perspective and how it shapes the data and its stories. In traditional data science education, we’re taught to fear bias, to minimize it, or to explain it away. But the most widely accepted statistical methods in the West had bias embedded in them from the start: they were designed by eugenicists for the purpose of separating racial populations to prove the superiority of the “white race.” The truth is that all data is biased, and unpacking that bias can expose the positionality and power of the people who collected it.
As organizers, we can begin educating our communities about the flaws in top-down practices of collecting, interpreting, and using data. We can start pointing out when governments claim their data is completely true and unbiased, but refuse to accept community narratives due to their “lack of statistical significance.” And we can start to build data systems and stories that reflect our own truths and experiences. For example, grassroots archiving projects like Black in Appalachia’s Community Histories Project & Digital Archive show us that structuring data around our own stories can create space for collective interpretation to reveal buried histories and collective embodied truths.
To build collective power to resist data extraction by systems of power of all kinds, communities can self-organize around data just as they have in other political terrains, building political leverage along the way. Just as labor unions help people bargain for better working conditions, and just as land trusts help neighborhood residents regain control over community land, neighborhood data boards or similar cooperative structures could represent a new form of collective bargaining for data at the local level.
Collective refusals, or data strikes, not only have the potential to influence government data collection practices but may also have the power to hamstring the tech industry and its main source of profit. The advertising industry has become all but synonymous with the personal data industry, driven by unethical data extraction. At the beginning of 2025, it was poised to exceed $1 trillion in annual revenue for the first time, with the Big Five tech companies accounting for over half of the world’s advertising revenue.
Many communities already practice refusal when it comes to government data collection — a well-documented phenomenon, as evidenced by Census nonresponse rates. Community members who are surveyed year after year about their lack of access to housing, food, transit, education, or other life-sustaining resources can refuse to share their data until conditions improve. These collective refusals should also be accompanied by clear statements of intent and tools that leverage this collective power to effectively hold valuable community data hostage and make demands before more data can be extracted.
Communities skilled in data defense might learn to ask the right questions when someone asks to collect their data: Does the person collecting or using this data have my best interests at heart? Does the system that this data will reinforce ultimately serve me? Are there policies in place to protect my rights as a data constituent? Can I ask to be removed from the data later?
Building a movement around data justice
While many collaborators and co-conspirators are exploring questions of data justice, organizers have consistently approached me with an overwhelming desire to explore these emerging questions in grassroots contexts. For example, how do we collect data from our own community in ways that preserve the traditional knowledge or lived experiences of our people? And how do we communicate this data in spaces of institutional power while protecting our people’s privacy? When is the right time for a collective mapping effort, and when are stories enough? How do we fight back against unjust narratives that governments are creating about us when they have all the data?
Data justice is a space rich in opportunities for grassroots experimentation. The People’s Data Project is building frameworks and resources to share collective learning about how we can fight for our data rights and the role data justice can play in our movements. The Project also seeks to give organizers the language and tools to become data justice educators in their own communities.
Technologists have an important role to play in helping us experiment with justice-driven data tools of the future. Without data rights as a framework, however, technologists’ internalized habits of gatekeeping and overvaluing institutionalized data practices tend to overpower community approaches to knowledge-keeping. Ultimately, our explorations of data justice have to involve a fundamental reimagining of our collective ways of knowing.
Because of corporate and institutional systems’ increasing reliance on data, even small blocks in the pipelines of unlimited data extraction could generate change. But it has to start with individual and collective political awareness of our power to reclaim our data and a strategic awareness of the points where we can be effective in doing so. Through a million experiments in data justice, we can begin to undermine the unchecked power of harmful systems to extract as much data as they want without consequence. We can begin to build a data future of our own.