Democracy is frequently taken to mean that those affected by a decision have the right to vote on that decision. But this simple definition overlooks some of democracy’s other key components, namely: (i) that those affected by decisions also have the right to collectively debate and deliberate, so that they can learn from and convince others of their needs and wants; and (ii) that there exist means to mitigate the fact that democratic citizens rarely enter the deliberative arena with equal power or voice.

The Democratizing Machine Learning project seeks to incorporate this richer meaning of democratization to generate more decentralized and equitable forms of artificial intelligence and machine learning. Taking as its starting point the well-documented bias and discrimination that has been designed into algorithmic systems, this project aims to counteract these effects, taking on a burden that has too often fallen upon individuals from marginalized communities themselves. This work entails an attempt to define the boundaries, first principles, and essential characteristics of truly democratized machine learning, and to set forth protocols for protecting privacy and minimizing the harmful effects of bias.

With $50,000 in support from Columbia’s Data Science Institute, we aim to develop this research into an initiative that surfaces policy recommendations and strategies for implementation. As we expand our Democracy & Trust program, we will continue to develop projects in areas where mechanisms of trust, communication, and participation are not yet fixed, and where democratic principles may still prevail.