This is a bonus post, following from yesterday’s, in which I interviewed the artificial intelligence known as ChatGPT about meditation.
This time I gave the AI the prompt, “Write a Buddhist sutta, in the style of the Pali canon, in which the Buddha tells Ananda to stop playing Candy Crush on his iPhone.”
Here’s what ChatGPT came up with.
Once, the Blessed One was dwelling at Sāvatthī in Jeta’s Grove, Anāthapiṇḍika’s Park. And at that time, the Venerable Ananda was addicted to playing Candy Crush on his iPhone.
The Blessed One approached Ananda and said, “Ananda, what are you doing?”
Ananda replied, “I am playing Candy Crush, Lord.”
The Buddha spoke: “Ananda, this is not the path to liberation. The playing of such games only leads to craving and attachment. To achieve true peace and wisdom, one must let go of such distractions and engage in virtuous and meaningful actions.”
Ananda listened to the Buddha’s words and put down his iPhone, vowing to let go of his attachment to Candy Crush. He turned his attention to the Buddha’s teachings and soon realized the path to enlightenment.
And so it is said:
“Just as a bird is freed from its cage
when the door is opened, so too one is freed
from the bonds of ignorance and craving
by practicing the path taught by the Buddha.”
Appropriately, the image above was generated by another AI (Craiyon, formerly DALL-E Mini), using the prompt, “book illustration of the Buddha holding an iPhone.”
Coming back down to earth, ChatGPT’s predecessor was prone to using vile racist and misogynistic language, because its database of information was drawn from the web, and the web is full of that kind of thing. For the latest version of the AI, a program had to be developed that could recognize and screen out the toxic material. In order to do that, Kenyan workers were paid as little as $1.32 an hour (via a subcontractor of OpenAI, which owns ChatGPT) to read and categorize the text. Workers were traumatized by reading, for example, graphic descriptions of bestiality and the sexual abuse of children.
The contract between OpenAI and the subcontractor ended after an article was published revealing that Facebook had employed the same subcontractor to screen content. In that case, Kenyans were required to view images and videos of executions, rape, and child abuse for as little as $1.50 per hour. This is traumatizing work.
AI and social media are fun, but there is a dark and exploitative side to them as well.