Do the Ethical Dangers of AI Outweigh the Benefits?
Utilitarian philosophy might not hold the answer
How do we define good?
Utilitarian philosophy suggests that we weigh the total benefit against the total cost.
But what if the immediate cost is low while the long-term costs are incalculable? Is short-term convenience enough of a reason to gamble with our future?
Common sense demands that we anticipate outcomes, and that we exercise self-discipline by resisting indulgences, even when the potential for harm is in doubt.
Here's a delicious milkshake, and there's only a 1 percent chance that it's laced with cyanide. Bottoms up?
Somewhere between unbridled enthusiasm for change and the calcified mindset of the Luddites resides an elusive place called wisdom.
We stop seeking it at our peril.