Gender Glitch Matrix: Subversive Media Archives
Machine Learning | Digital Media
as part of the MIT Architecture SMArchS Computation thesis:
"Gender Glitch Matrix: Queer Aesthetics and the Politics of Error in Digital Media" 

Project: Merve Akdogan
Advisors: Azra Akšamija, Larry Sass
Reader: Panagiotis Michalatos
Time: May 2024

Step 1 - Abnormalizing the Matrix
The main objective of this first step is to generate insight into how discrete modifications to a model's parameters influence its visual output during optimization. The framing goes back to Judith Butler's concept of the "Matrix of Intelligibility": the social frameworks that define and regulate the recognition of identities, particularly gender identities, within society. The idea of playing with parameters, of modifying the matrix of parameters inside the algorithm, rests on this theory, suggesting that by altering the social and cultural parameters that constitute the matrix, it is possible to challenge and expand the boundaries of which identities are considered intelligible or legitimate. Every matrix inside a neural network is cultural and social, reflecting society's shortcomings and biases. This is where the "Gender Glitch Matrix" comes into play.
Optimization in machine learning refers to adjusting a network's parameters (such as weights and biases) to minimize a specified loss function. Although it uses the same underlying methods, this technique can be employed in two distinct ways. The first approach optimizes the network's parameters during the learning or training phase, improving the model's ability to predict or classify data accurately. The second approach, alternatively, optimizes the input to the model.
This second method effectively transforms non-generative models into generative ones: the input is adjusted until the output is maximally aligned with a desired outcome. The primary objective of this study is not to achieve a balanced, finely tuned model but to disrupt its operation by randomly modifying the parameters that influence the visual outcome, in order to analyze the model's behavior under specific conditions. The classifier networks used here are designed to recognize objects within images and are trained on public datasets, so the features they recognize reflect the biases embedded in those datasets. The technique introduces localized damage to these parameters, a process similar to catastrophic forgetting, in which networks gradually lose their ability to recognize objects. The damage is analyzed by shifting the model's task from recognition to searching for the image that most closely resembles a given concept. This serves as a way to dissect the "black box": altering parameters within the matrix and observing the resulting changes in behavior, effectively using optimization to gauge the extent of the damage.
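The input-optimization idea can be sketched with a toy example. Here a frozen linear layer stands in for the pretrained classifiers used in the thesis, and gradient ascent is run on the input rather than the weights; the matrix size, class index, step size, and step count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "classifier": one frozen linear layer over 3 classes.
# In the thesis this role is played by a pretrained image classifier.
W = rng.normal(size=(3, 8))  # weights stay fixed throughout
b = rng.normal(size=3)

def class_score(x, target):
    """Logit of the target class for input x."""
    return (W @ x + b)[target]

def optimize_input(x, target, lr=0.1, steps=200):
    """Gradient ascent on the INPUT, leaving the weights untouched.
    For a linear layer, the gradient of the target logit w.r.t. x is W[target]."""
    for _ in range(steps):
        x = x + lr * W[target]
    return x

x0 = rng.normal(size=8)               # random starting "image"
x_opt = optimize_input(x0, target=1)  # push the input toward class 1

# The optimized input scores higher for the target class than the original.
assert class_score(x_opt, 1) > class_score(x0, 1)
```

The same loop, run against a deep image classifier with autodiff in place of the analytic gradient, is what turns a recognition model into an image generator.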
Neural networks consist of layers that data passes through, and each layer holds millions of parameters. The technique I used was slicing out certain layers, then picking random parameters and applying operations such as changing the sign, adding noise, scaling, dropping weights, and swapping weights. I tested these methods on four different neural network architectures.
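The perturbation operations listed above can be sketched on a single weight matrix. The matrix shape, fractions, and noise scale below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)
W = rng.normal(size=(16, 16))  # one layer's weight matrix (shape is illustrative)

def flip_sign(W, frac=0.1):
    """Flip the sign of a random fraction of the weights."""
    mask = rng.random(W.shape) < frac
    return np.where(mask, -W, W)

def add_noise(W, scale=0.5):
    """Add Gaussian noise to every weight."""
    return W + rng.normal(scale=scale, size=W.shape)

def scale_weights(W, factor=3.0):
    """Uniformly scale the whole matrix."""
    return W * factor

def drop_weights(W, frac=0.2):
    """Zero out a random fraction of the weights."""
    mask = rng.random(W.shape) < frac
    return np.where(mask, 0.0, W)

def swap_weights(W):
    """Shuffle the weights while keeping the matrix shape."""
    flat = W.flatten()
    rng.shuffle(flat)
    return flat.reshape(W.shape)
```

In a real network the same operations would be applied in place to a chosen layer's weight tensor before running the input optimization.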
To test the effect of manipulating different layers in a neural network, I used a basic circular graphic with the prompt "butterfly." Without any manipulation of the layers, the model simply maximizes the probability that the image is a butterfly.
If the first layers are manipulated using random initialization techniques, the model loses its ability to detect features and produces blurry, glitchy visuals.
If the middle layers are randomized using scaling and weight-dropping techniques, the model loses its ability to understand patterns and generates random patterns instead of a butterfly.
Finally, if the last layers are manipulated using scaling and weight-dropping techniques, the model loses its semantic ability and starts hallucinating different scenarios, attaching various unrelated attributions to the image.
I explored various ways of creating glitches. I refer to these techniques as blur disruption, pattern disruption, and semantic disruption, and I use them to glitch the visuals in the next experiment.
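The three disruptions above can be sketched as targeting different depth bands of a network. The stand-in network here is just a list of NumPy weight matrices, and the band boundaries and operations are illustrative assumptions standing in for the classifier architectures used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in network: a list of weight matrices, ordered input to output.
layers = [rng.normal(size=(8, 8)) for _ in range(9)]

def disrupt(layers, band):
    """Perturb one depth band of the network:
    'blur'     -> first third, random re-initialization (feature loss)
    'pattern'  -> middle third, scaling + weight dropping (pattern loss)
    'semantic' -> last third, scaling + weight dropping (semantic loss)"""
    n = len(layers)
    bands = {"blur": range(0, n // 3),
             "pattern": range(n // 3, 2 * n // 3),
             "semantic": range(2 * n // 3, n)}
    out = [W.copy() for W in layers]
    for i in bands[band]:
        if band == "blur":
            out[i] = rng.normal(size=out[i].shape)        # random re-initialization
        else:
            out[i] *= 3.0                                 # scaling
            out[i][rng.random(out[i].shape) < 0.2] = 0.0  # weight dropping
    return out

glitched = disrupt(layers, "semantic")  # only the last third of the layers changes
```

Running the input optimization against each disrupted copy is what produces the blurry, patterned, or hallucinated visuals described above.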
Step 2 - Subversive Media Archives
In this experiment, the goal is to manipulate media images that carry stereotypical visual representations, such as magazine covers. In the zine example shown, an advertisement has been subverted through zine-making manipulations; in digital systems, the "glitch" technique plays the role that "zine-making" plays in print.
As a first step, I collected 100 magazine covers, both old and new, to glitch the images with the semantic disruption technique explored in Step 1. 
After applying the semantic disruption technique, which randomizes the parameters of the last layers, the model experiences a semantic glitch: it hallucinates and fails to align closely with the prompt.
Using the prompt 'butterfly,' the model hallucinates results that are completely different from the original prompt, revealing the model's biased perceptions through the content of the image.