Digital America interviewed Avital Meshi in early November 2020 to discuss her work Deconstructing Whiteness (2020) and the racial impacts of facial recognition technology in our society.
:::
DigA: Between 2019 and 2020, several of your pieces have utilized the AI facial recognition system featured in Deconstructing Whiteness. Where did your interest in analyzing how AI systems function stem from, and why do you believe this topic has been such a heavy focus in your recent work?
AM: I was introduced to creative AI while studying for my MFA in Digital Arts and New Media at UC Santa Cruz. I attended Prof. Angus Forbes’s Applied Deep Learning course, where we explored the creative, cybernetic potential of machine learning and its cultural ramifications. We considered the creative outputs of different algorithms and questioned how our awareness and thinking change when we introduce this new technology into our lives.
I came to this class following an extensive examination of identity as experienced in virtual worlds, and I was curious to explore this theme in the context of AI systems. My early experimentation with facial recognition algorithms struck me as both problematic and powerful. On the one hand, the technology uses external cues to infer aspects of an individual’s identity; I find this method oppressive, even violent, and redolent of the phrenological practices that were used in the past for discriminatory purposes. On the other hand, I am fascinated by the sense of individual agency that can emerge from spending time with these algorithms and learning to control the way they see us. In this respect, I consider AI algorithms platforms for radical identity transformation, and I would like to better understand the performative tools that can provide this kind of control over identity.
DigA: In your works Techno-Schizo (2020) and The AI Human-Training Center (2020), the exploration of identity through new technologies is also prevalent. How have you, as an artist, learned more about your own identity within the context of your work?
AM: When I look at myself through the lens of facial recognition algorithms, I’m always fascinated by the conflicting readings that emerge. For a quick moment, the system can read me as “44 years old, belly dancing, scared, male,” and a moment later as “28 years old, tying tie, surprised, female.” In Techno-Schizo, I spend a lot of time looking at myself through these algorithms, examining each and every label and trying to find my own agency within this system. My projects take the advice offered in the manifesto of the ‘New Aesthetics’, which suggests that we should avoid being passive when we are confronted with these systems. There is no reason to let these algorithms dazzle our minds by showing us a real-time documentary of ourselves. Within my entanglement with the system, my goal is to figure out what expressions and movements are needed so that the system will see me as I wish to be seen. It doesn’t work for me all the time; with some features it is easier, and with others it is nearly impossible. This understanding made me very aware that not everyone can “trick” the algorithm in the same way. There is a divergence among people in this sense, and that makes me ponder which individuals or groups are most likely to be accurately detected and which will find it easier to avoid detection. This is something I worry about.
In The AI Human-Training Center, I invited others to engage in a similar exploration. The project was a participatory performance, a kind of social event, that took place in a Zoom meeting where everyone tried to control the way the algorithm captured their emotional expression. The relational aesthetic of this project allowed people to see their own classifications as well as the classifications of others. Seeing one another’s classifications provided a sense of community; we were all in this together, subjected to the same algorithms and the same structures of thinking. Yet, as we participated, it became obvious that some of us could do “better” with these algorithms. I hope that the experience provoked some urgency for people, prompting them to become more familiar with this technology and to better understand the impact it might have on our lives and on our society.
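To make the kind of live readout participants watched a little more concrete, here is a minimal sketch of real-time emotion classification from a webcam, assuming the open-source DeepFace library and OpenCV. The interview does not name the system actually used in The AI Human-Training Center, so this is purely illustrative.

```python
# Minimal sketch: a running emotion readout from a webcam feed.
# Assumes the open-source DeepFace library and OpenCV -- not necessarily
# the tools used in the actual performance.
import cv2
from deepface import DeepFace

cap = cv2.VideoCapture(0)  # default webcam, as a Zoom participant's camera would be
while True:
    ok, frame = cap.read()
    if not ok:
        break
    try:
        # analyze() returns one result per detected face; take the first one
        result = DeepFace.analyze(frame, actions=["emotion"], enforce_detection=False)
        face = result[0] if isinstance(result, list) else result
        label = face["dominant_emotion"]
        score = face["emotion"][label]
        cv2.putText(frame, f"{label}: {score:.1f}%", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    except ValueError:
        pass  # no face found in this frame
    cv2.imshow("classification", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

Watching the label flicker between categories as you change your expression is the experience the performance builds on: the classification is continuous, unstable, and open to being steered.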
DigA: Through Deconstructing Whiteness you “examine the visibility of whiteness through the lens of AI technology.” What inspired you to examine whiteness as a construct? Furthermore, even though you specifically focused on an AI’s perspective on whiteness, do you feel this piece also highlights the relationship between minority races and modern-day technology?
AM: My exploration of racial issues is informed and inspired by my personal history and my education. My grandparents were all Holocaust survivors, and I grew up listening to stories and witnessing the long-term effects of the indescribable and traumatic racist experiences they’d survived. When I immigrated from Israel to the US, I was granted an opportunity to learn more about racism outside of the Jewish experience. I still have a lot to learn. I am doing my best to pursue an anti-racist life and am educating my children to do the same. My grappling with “Whiteness” is complicated. In conversations with American Jews, I am told that since I am Jewish I am considered white. Yet, as an Israeli, I have more affinity to the Middle Eastern racial category than to Whiteness. So I find myself in a sort of racial limbo.
In Deconstructing Whiteness, I explored my personal racial visibility as seen through the AI algorithm, and I examined the confidence level with which the system recognized me as white. I found that there are moments in which the system detects me as “White” with an almost 100% confidence level, but at other moments this certainty drops dramatically. The change was easily brought about by altering my facial expressions and my hairstyle. The outcome of this examination is a literal deconstruction of my own racial visibility. There is also a fascinating manifestation of identity as a flow of probabilities, as opposed to a simplified white/non-white dichotomy. Still, this project does not imply that my experience applies to everyone. It is critical to examine who can trick the machine and who can’t; who is detected accurately and who is not. This point is perhaps one of the most important focuses of my work, and it demonstrates the harmful potential of facial recognition technology (as it is currently designed) to perpetuate racism and impact people’s lives in an unequal manner. This understanding coincides with the important work of the digital activist Joy Buolamwini, who researched intersectional accuracy disparities in AI classification of race and gender and demonstrated them in commercial classification algorithms.
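For readers curious about what a per-category confidence readout looks like in practice, here is a minimal sketch assuming the open-source DeepFace library; the image filename is a hypothetical placeholder, and Meshi’s actual system is not named in the interview.

```python
# Minimal sketch of per-category race-classification confidence, assuming the
# open-source DeepFace library (not necessarily the system used in the work).
from deepface import DeepFace

# "self_portrait.jpg" is a hypothetical placeholder image path.
result = DeepFace.analyze(img_path="self_portrait.jpg", actions=["race"])
face = result[0] if isinstance(result, list) else result  # newer versions return a list

# The classifier reports a confidence score per category rather than a single
# verdict -- the "flow of probabilities" the work makes visible. Changing
# expression or hairstyle between captures can shift these numbers markedly.
for category, confidence in sorted(face["race"].items(), key=lambda kv: -kv[1]):
    print(f"{category:>16}: {confidence:5.1f}%")
print("dominant label:", face["dominant_race"])
```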
DigA: Any art addressing race in 2020 is seen within the context of the murder of George Floyd and the resulting Black Lives Matter momentum. How does Deconstructing Whiteness respond to BLM as a social movement, and was the piece conceptualized before or after the protests began?
AM: Deconstructing Whiteness was conceptualized a few weeks before the murder of George Floyd. This awful event and the resulting BLM momentum encouraged me to complete the first version of this investigation and to start showing it to others. I am interested in continuing to develop this work by inviting others to explore their own racial visibility, too.
This piece is not the first in which I explore racial visibility, but I must admit that each time I touch this theme I need to muster my confidence before I present my work. Deconstructing Whiteness was released at a very sensitive moment, and I wasn’t sure how people would respond to it. Some of my previous race-related work drew harsh discussions and discouraging critiques. There were moments when I was told I did not know enough to participate in the conversation. When something like this occurs, I step back, I ask questions, I read, and I do my best to learn, but I always try to come back to the conversation. Even if I make mistakes and even if the work can be further developed, my intentions are anti-racist, and I find this conversation too important to watch from the sidelines. I strive to become an ally to those who suffer from racism.
DigA: Deconstructing Whiteness shows us how AI attempts to determine an individual’s race, expressed as a percentage, from appearance, and how technological systems can be used to visually profile. Although AI systems are not inherently racist, racism is coded and embedded in them through the inherent biases of the coder. This opens up a further discussion about segregation and stereotyping bleeding into AI functions. Through the exploration of race and technology in this piece, how do you hope to draw attention to AI and racial profiling, and what are your hopes and fears for the future of this?
AM: It is true that the system in itself is not racist and that we as people embed our inherent biases into it. However, I think it is too simple to just point an accusing finger at a coder with ill intentions. The people who write these algorithms are mostly computer scientists who use their skills to advance the technology. I expect that most of these scientists are not trained to think in terms of social and cultural justice and therefore lack the capacity to socially critique their own work. I claim that the racism we see in the code is a reflection of biases that we are all responsible for as a society.
As an artist working with this medium, I take on the role of raising awareness of the incongruence between the technology and the social environment it resides in. My hope is that tech companies and computer scientists will collaborate with people from the humanities to seriously consider issues of fairness before releasing algorithms that can impact people’s lives. AI is a new technology, but it is developing fast. Yet the level of awareness regarding fairness and accountability is growing, too, and hopefully it will soon catch up with the technology.
I also hope that people will not rely on these algorithms before they are proven to be reliable. Therefore, I fully support the decisions of cities like San Francisco, Oakland, and Berkeley to ban facial recognition algorithms from government use. The city of Portland has even banned private use of the technology, which I think is a smart move.
Lastly, since this technology is not going away, I hope that individuals will do their best to become familiar with it and learn how it might impact their lives. I hope that we can find ways to prove to ourselves that we are not powerless when confronted with these systems.
:::
Avital Meshi is a California-based new media artist. She creates interactive installations and performances that invite viewers to engage with new technologies in unusual ways. The entanglement between the body and technology reveals unique aspects of identity and social connection, and it provokes conversations about role-playing, identity tourism, cultural appropriation, virtual life, and artificial intelligence. Meshi holds an MFA from the Digital Arts and New Media program at UC Santa Cruz and a BFA from the School of the Art Institute of Chicago. She also holds an MSc in behavioral biology from the Hebrew University of Jerusalem.