The Ethical Lexicon #11: Keeping it real in the era of ChatGPT
Do you like fiction? Most of us do.
But it’s a strange indulgence. We consume stories that we know aren’t true, easily and willingly suspending our disbelief.
And yet, we feel misled and betrayed when purportedly true accounts prove less than truthful.
And that is precisely the problem with ChatGPT. What has the potential to be an extraordinarily useful AI tool can simultaneously be employed as a weapon of mass deception.
We don’t mind technology doing our jobs, but we want to be able to clearly recognize the line between it and us. Otherwise, we feel insecure in our jobs, in our relationships, and in our lives.
How far will new technology go? How long until the computers that are helping me do my job start doing it for me and making me obsolete?
That insecurity both stems from and deepens a lack of trust. And the surest way to build and maintain trust is transparency.
That’s why ethical leaders have an obligation to:
Communicate what innovations are being considered and tested.
Articulate a (genuine) commitment to safeguarding job security throughout the organization.
Ensure that employees will be prepared to adapt to coming changes.
We can’t put the genie back in the bottle. But we can keep it from masquerading as one of us.
Please click to read more about preserving our grip on reality in the virtual world in this week’s installment of the Ethical Lexicon in Fast Company.