4 Comments

  1. AI is being integrated into and used in many aspects of our lives, but the real driving force behind this integration is power, authority, and profit. We see examples in tenant screening systems and in AI trained on data subjects who are exploited by the authorities and the people who design these systems. Tenant screening companies such as RealPage do not even have to register with government agencies to carry out their operations. These companies build and deploy systems that oppress marginalized groups and favor groups that hold power. The name-matching algorithm used by RealPage is one such example: it makes errors on common names, particularly for members of minority groups, who are more likely to share last names (a simplified sketch of this failure mode follows this comment). The company’s business model is designed to provide as many tenant screenings as possible, whether or not they are accurate, in order to generate maximum revenue (the company earns approximately $48 million annually).
    This points to the need for fair AI systems that are monitored by and held accountable to the people. Such systems should not serve an organization’s (or individual’s) private agenda; they should be fair and transparent toward the people they are built for. They should represent the target population well, particularly marginalized and oppressed communities. Most of all, these AI systems should be overseen by responsible authorities, and not only by those already in power.
    There is a need to shape AI more ethically. The question is not how powerful AI is, or to what extent it will control our lives; it is how we can shape AI to represent people’s voices and to work for them.
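
    A minimal sketch of the name-matching failure mode mentioned above, assuming a naive exact-match screen. This is purely illustrative (it is not RealPage’s actual algorithm), and every name, date, and record in it is made up:

    ```python
    # Hypothetical court records; a common surname appears more than once.
    court_records = [
        {"first": "Maria", "last": "Garcia", "dob": "1961-02-14", "case": "eviction"},
        {"first": "Maria", "last": "Garcia", "dob": "1994-07-30", "case": "theft"},
        {"first": "John",  "last": "Smith",  "dob": "1970-01-01", "case": "fraud"},
    ]

    def naive_screen(applicant, records):
        """Flag the applicant if any record shares their first and last name,
        ignoring date of birth, middle name, and jurisdiction."""
        return [
            r for r in records
            if r["first"].lower() == applicant["first"].lower()
            and r["last"].lower() == applicant["last"].lower()
        ]

    # A different Maria Garcia, born in 1988, applies for an apartment.
    applicant = {"first": "Maria", "last": "Garcia", "dob": "1988-05-02"}
    matches = naive_screen(applicant, court_records)
    print(f"Records attributed to the applicant: {len(matches)}")  # 2 false positives

    # Because common surnames recur, applicants from groups that frequently share
    # last names are disproportionately matched to strangers' records.
    ```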

  2. The readings bring up some themes we have discussed in class before, specifically the politics of artifacts and humans & machines in the loop.

    First, the built-in features of technologies often produce consequences that are intellectually and practically important. Yet these features, and their consequences, are frequently overlooked and not well studied. Why? Because technological invention is fatefully tied to corporate profit, and is therefore viewed solely in terms of efficiency, cost-cutting, and modernization, without consideration for the significant social impacts it always entails. Innovations are often the embodiment of a social order in which certain social interests (those of corporations, of the more dominant players, and so on) are favoured over others. Kalluri (2020) demonstrates this clearly with surveillance AI, where the focus is never on the data subjects, the people who are tracked often without consent, nor even on the people who build the algorithms. The reason developers have trouble seeing their products as reinforcing inequity is precisely that the successful development of any technology is tied to the interests of the powerful (corporations, decision makers), masked in the language of efficiency and mathematical usefulness.

    Corporate profit also explains why background screening companies refuse to add a human in the loop, which would undoubtedly reduce the costly errors they make in tenant screening. Court records are cheap and easy to access, and the cost of lawsuits and settlements is marginal compared with the cost of adding humans or fixing the algorithms. The data subjects’ interests are not taken into consideration, simply because they barely affect the companies’ profit. The cost to the data subjects, however, is realistically hefty: how much does it cost someone to lose their place on a housing priority list in today’s context?

    It is intriguing to see how interconnected AI errors are with social injustice. I wonder where we could even start to change this status quo, though, because doing so requires rethinking our very primitive idea of what an efficient and good technology is.

  3. As Pratyusha Kalluri emphasized in her first article, there is no good or bad AI per se; what matters is how those who build and use it want to distribute power. Many systems that already exist today ignore the needs of minority groups because they focus on global predictions and effects, as with Twitter and Facebook. The question of who is given the right to use a system can determine whether the AI leads to good or bad outcomes. For example, in the article, an AI that can accurately recognize faces, placed in the hands of an autocratic and oppressive government, would be a disaster for the people. So AI itself may not be right or wrong; it depends on the user’s purpose. In the second article, the authors describe how a flawed screening system from a background check company harmed innocent renters because of its automated filtering. AI systems are not inherently designed to avoid creating bias, so many essential decisions still require human intervention. Human intervention not only corrects errors but also gives the AI system a chance to learn again: with the information received through this feedback, much as in reinforcement learning, the system can further improve its accuracy (a rough sketch of such a review loop follows this comment). The failure described in the second article also serves as a cautionary tale for future AI system development: after an AI system is deployed, someone needs to evaluate its performance, and staff are needed to handle exceptional cases.
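
    A rough sketch of the review loop described above, assuming a simple confidence threshold for routing decisions to a person. The threshold, data structures, and reviewer function are illustrative assumptions, not any vendor’s actual pipeline:

    ```python
    from dataclasses import dataclass, field
    from typing import Callable, List

    CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff; a real system would have to tune this

    @dataclass
    class Decision:
        applicant_id: str
        flagged: bool        # automated verdict: True = flag the applicant
        confidence: float    # model's confidence in that verdict

    @dataclass
    class ReviewQueue:
        labeled_examples: List[dict] = field(default_factory=list)

        def process(self, decision: Decision,
                    human_review: Callable[[Decision], bool]) -> bool:
            # High-confidence decisions pass through automatically.
            if decision.confidence >= CONFIDENCE_THRESHOLD:
                return decision.flagged
            # Low-confidence decisions go to a person, and the correction is
            # stored as a labeled example for the next round of retraining.
            corrected = human_review(decision)
            self.labeled_examples.append(
                {"applicant_id": decision.applicant_id, "label": corrected}
            )
            return corrected

    # Example usage with a stand-in reviewer who clears the applicant.
    queue = ReviewQueue()
    outcome = queue.process(
        Decision(applicant_id="A-102", flagged=True, confidence=0.55),
        human_review=lambda d: False,
    )
    print(outcome, queue.labeled_examples)
    ```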

  4. What I found most interesting was how entangled AI issues are with social issues. Many of the problems described in the articles show how AI exacerbates existing social issues. For example, the facial recognition systems used by law enforcement are biased against people of color, which both reflects and exacerbates existing bias and discrimination. I saw something similar in a Netflix documentary called Coded Bias, in which current facial recognition systems recognize Black faces significantly less accurately than white faces.

    What concerns me is that these issues are becoming better known (and people are shedding light on other examples as well), yet I do not think much change is being made. As the tenant background check article suggests, people need to care more, and pressure needs to be placed on governments to create more legislation around the training of such algorithms and around the identification, categorization, and labeling of data.
