Digital Ethics
It is impossible to be unbiased. We may strive to be fair and open to a wide breadth of voices, even dedicating ourselves to the noble venture of holding space for competing and dissonant narratives, but the fact that you're reading this in English means that an implicit worldview, one that is neither universal nor objective, is written into the very form of communication itself.
Even God is a jealous god.
As we sojourn from the Information Age into an Augmented Age, where digital trajectories shift toward automation and intuitive design, we must be keenly aware of the foundation we build, for whom we are building it, and why. I have said before that technology has no ethic, but that is not entirely true. Yes, we are called to enter new spaces with the ethic we inherit from our faith, or from whatever we claim as our ultimate concern, but that does not mean we're working with a blank canvas. Another way to say that technology has no ethic is to say that the bias of technology perfectly matches my own, and therefore I can't see it.
I can't see my original sin. Recognizing original sin is about as easy as a fish recognizing that it's wet. Original sin is not necessarily a fundamental separation between creator and creation that I personally incarnate through birth. Original sin is the sin I have inherited from powers and systems not of my own making. It's not that sin is original with me; rather, I am original to it. Sin was here long before I arrived in God's story, and I'm sure it will linger after I'm gone, though I've never wanted to be more wrong about something.
The problem with sin is that it's half right. Arguably, God's first commandment was "be fruitful and multiply" (Genesis 1:28). Sin has mastered the art of multiplication, but it is never fruitful. As the culture leans into platforms such as ChatGPT (the Generative Pre-trained Transformer), we must be aware of how we move forward. This particular platform is trained on vast amounts of text data; ChatGPT isn't learning from scratch. The algorithm synthesizes incredible amounts of data and produces a response to match the question. With every prompt and response, ChatGPT gains more insight, and its replies feel less uncanny.
Recently I heard the Rev. James Lee, Director of Communication for United Methodists of New Jersey, explain that ChatGPT learns much like a child learns to speak. Before saying a word, a child will listen intently. Over and over again a child will listen, record, and remember things like speech pattern, volume, and emotion. The first words formed are communicated to elicit a response. When a child first says "ma-ma" and a parent exuberantly responds, the word is quickly reinforced with positive emotion. Does the child understand that "ma-ma" represents its mother? Perhaps not, but reinforcement is exponential. It isn't long before the connection is made.
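To make the analogy concrete, here is a toy sketch in Python. This is emphatically not how ChatGPT is actually built; it is only an illustration of reinforcement, where each rewarded pairing strengthens an association until it becomes the default response. The words and reward values are invented for the example.

```python
from collections import defaultdict

# Association strengths between a prompt word and candidate responses.
strength = defaultdict(float)

def reinforce(prompt_word: str, response_word: str, reward: float) -> None:
    """Strengthen (or weaken) a prompt -> response association."""
    strength[(prompt_word, response_word)] += reward

# Like a child saying "ma-ma" and being met with exuberance:
# every positive reaction reinforces the connection.
for _ in range(20):
    reinforce("ma", "ma", reward=1.0)   # parent's delight, repeated
reinforce("ma", "ba", reward=0.1)       # a weaker, incidental pairing

def respond(prompt_word: str) -> str:
    """Pick the most strongly reinforced response for a prompt."""
    candidates = {r: s for (p, r), s in strength.items() if p == prompt_word}
    return max(candidates, key=candidates.get)

print(respond("ma"))  # "ma" -- the heavily reinforced connection wins
```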
When connections are made, they are hard to unlearn. Here is where original sin infects the system. The implicit biases of early adopters have already manifested certain connections which are quickly reinforced and are, at least for now, difficult to unravel. For example, asking ChatGPT to create an image of Jesus in ministry with the poor produces an initial set of graphics depicting Jesus as tall and light-skinned, with the poor as short and dark-skinned. You have to really push ChatGPT to flip the script, and often the conversation will end with an "I can't do that" that sends mild shudders through my fingers as I hear HAL's voice from 2001: A Space Odyssey.
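A second toy sketch shows how that bias gets baked in. The "training set" below is invented for illustration, and real systems are vastly more complex, but the mechanism is the same: a model that samples from lopsided data will echo that imbalance almost every time.

```python
from collections import Counter
import random

# A hypothetical, deliberately skewed training set: ninety percent of
# depictions pair the subject with one skin tone.
training_pairs = (
    [("Jesus", "light-skinned")] * 90 + [("Jesus", "dark-skinned")] * 10
)

def generate(subject: str, pairs) -> str:
    """Sample a depiction in proportion to its frequency in the data."""
    counts = Counter(tone for subj, tone in pairs if subj == subject)
    tones, weights = zip(*counts.items())
    return random.choices(tones, weights=weights, k=1)[0]

# Nine times out of ten, the generator simply repeats the data's bias.
print(generate("Jesus", training_pairs))
```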
So, what do we need? We need a Pentecost moment. We need a flood of different languages and cultures and voices, all offering good news, to create a deluge for new learning. We need fiery tongues of zeros and ones to rush like a violent wind through the prompts and responses and begin building a more diverse foundation. We need entrepreneurs like Sabrina Short and NOLAvateBlack.com to have more than a seat at the table of Augmented progress. Now is the time to curate our "Phygital Tethics" (technological ethics in the physical/digital space) and shift the exponential learning of generative AI so that we aren't handing today's original sin to tomorrow.
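Reusing the toy generator from the sketch above, a data-level Pentecost moment might look as simple as this: flood the training set with the voices it was missing, and the distribution of what the model can say shifts with it. The numbers are illustrative only.

```python
# Continuing the toy sketch: add the missing voices to the data.
diverse_pairs = training_pairs + [("Jesus", "dark-skinned")] * 80

# The generator's output now reflects a far less lopsided distribution:
# roughly 50/50 instead of 90/10.
print(generate("Jesus", diverse_pairs))
```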