The Original Sin of ChatGPT, or The Birth of Phygital Tethics

In an era of rapidly evolving technology, the foundations upon which we build our digital landscapes are riddled with biases, often mirroring our own unseen prejudices. As we delve deeper into the world of artificial intelligence, and specifically into platforms like ChatGPT, it’s imperative to address three core challenges: the biases inherent in technology, the learning mechanisms that further solidify those biases, and the looming threat of perpetuating them into future generations. Let’s explore these pressure points and offer solutions that pave the way for a more inclusive and diverse digital future.

Pressure Point 1: The Inherent Bias in Technology

It is impossible to be unbiased. Even as we dedicate ourselves to being open to a variety of voices, the very language we use, English in this case, carries an implicit worldview that is neither universal nor objective. As we transition from the Information Age into an Augmented Age, where automation and intuitive design are at the forefront, we must be conscious of the foundation upon which we build. While it’s easy to believe that technology lacks ethics, in reality its biases often mirror our own, which makes them invisible to us. Thus, to say that technology has no ethics may really mean that its biases perfectly match ours.

Solution 1: Recognizing and Acknowledging the Bias

The first step towards addressing a problem is recognizing it. Understanding our “original sin,” the biases and systems that existed before us and that we unknowingly adopt, is crucial. We need to accept that these biases exist and work actively to counteract them. Only by acknowledging our own shortcomings can we start to build a more inclusive digital landscape.

Pressure Point 2: The Learning Mechanism of ChatGPT

ChatGPT, like other AI platforms, doesn’t start from scratch. It synthesizes vast amounts of data to produce responses. Rev. James Lee likened its learning process to that of a child: a child listens, records, and remembers speech patterns, volumes, and emotions before forming words. Those words, once positively reinforced, become part of the child’s vocabulary. Similarly, ChatGPT forms connections based on the data it’s exposed to. But herein lies the problem: once these connections, especially biased ones, are established, they’re challenging to undo. One example: when ChatGPT was asked to depict Jesus with the poor, the resulting imagery displayed racial biases in its representation.
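To make that mechanism concrete, consider a deliberately oversimplified sketch. Real systems like ChatGPT are neural networks trained on billions of documents, not word counters, so treat this purely as an illustration of the principle: a learner that absorbs frequencies will faithfully reproduce whatever skew its corpus contains, and an association, once formed, can only be outweighed, never erased.

```python
from collections import Counter

# A toy "training corpus". The skew is deliberate: "leader" appears
# alongside "he" three times as often as alongside "she".
corpus = [
    "he is a leader", "he is a leader", "he is a leader",
    "she is a leader",
    "she is a nurse", "she is a nurse",
]

# The learner simply counts which pronoun each role word co-occurs with.
associations = Counter()
for sentence in corpus:
    words = sentence.split()
    pronoun, role = words[0], words[-1]
    associations[(role, pronoun)] += 1

for (role, pronoun), count in sorted(associations.items()):
    print(f"{role} ~ {pronoun}: {count}")
# leader ~ he: 3
# leader ~ she: 1
# nurse ~ she: 2
# The counts only ever accumulate: once the skew is learned, it cannot
# be "unlearned", only outweighed by new, different data.
```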

Solution 2: Actively Diversifying the Learning Data

To counteract such biases, we need a flood of diverse languages, cultures, and voices to reshape the AI’s learning process. We require a digital “Pentecost moment,” where varied inputs create a deluge of new, diverse learning. Entrepreneurs from diverse backgrounds, like Sabrina Short, and platforms like NOLAvateBlack.com, should play a pivotal role in this process, ensuring broader perspectives in AI training; in data terms, that flood looks something like the sketch below.
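One rough, data-level analogue of that “Pentecost moment” is deliberately rebalancing a training set so that underrepresented voices are no longer drowned out by sheer volume. The sketch below is hypothetical (real training pipelines involve far more than sampling weights): it oversamples the minority source until both communities contribute equally.

```python
import random
from collections import Counter

# Hypothetical corpus entries tagged by the community they came from:
# source A dominates 90/10, so its voice drowns out source B's.
corpus = (
    [("text from source A", "community_a")] * 90
    + [("text from source B", "community_b")] * 10
)

# Weight each entry inversely to its community's size, then resample.
by_source = Counter(tag for _, tag in corpus)
weights = [1.0 / by_source[tag] for _, tag in corpus]
balanced = random.choices(corpus, weights=weights, k=1000)

print(Counter(tag for _, tag in balanced))
# Roughly a 500/500 split instead of the original 90/10 skew.
```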

Pressure Point 3: The Risk of Perpetuating Today’s Biases into Tomorrow

The issue isn’t just recognizing the biases in AI today; it’s also the potential for those biases to be passed on to future generations. If left unchecked, today’s AI might become tomorrow’s educator, further solidifying these biases in the minds of the next generation.

Solution 3: Curating Our Phygital Tethics

Now is the time to shape our technological ethics in both the physical and the digital space. By actively working on our “Phygital Tethics” (technological ethics in a physical/digital world), we can shift the exponential learning of generative AI. This proactive approach ensures that we don’t pass today’s biases on to tomorrow.

In conclusion, as we lean into the Augmented Age, it’s essential to be aware of the biases ingrained in our systems. By recognizing them, diversifying our learning data, and curating our technological ethics, we can build a more inclusive and unbiased future.

By the way… the text was edited in ChatGPT, and all graphics were generated from prompts.