Mind Labyrinth: Artificial Intelligence Proponents Are Scaring Us to Boost Engagement and Profits
[TLDR version: AI Can No More Overcome Us Than Its Programmers Could Beat Us At Kickball -- OK, now you may skip reading]
Building on my earlier Substack essays related to Artificial Intelligence:
We Fear the One in the Mirror
[Caution: the following is unsuitable for the TLDR:CAMERA Crowd.] In an article at https://newatlas.com/technology/ai-danger-kill-everyone/, Loz Blain of New Atlas asserts that Eliezer Yudkowsky has made the case for seriously considering entirely shutting down Artificial Intelligence (AI).
and
We Are Not God
Last month, Joe Rogan and Marc Andreessen engaged in an illuminating discussion, some of which thoroughly dissected their concerns about Artificial Intelligence. You can review it here if you wish:
I ventured forth yesterday into a first-time ChatGPT membership to pose a challenge intended to probe what I perceive to be a fundamental bootstrap problem: one that would prevent any AI from self-generating motivations and capabilities beyond what had been purposefully programmed into it. I requested:
“Create a game with such a high level of difficulty that ChatGPT could not possibly win.”
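(For the technically inclined: here is a minimal sketch of how one could pose the same prompt programmatically through OpenAI’s published Python library, rather than the ChatGPT web interface I actually used. The model name and key setup are illustrative assumptions on my part, not a description of what powers a ChatGPT membership.)

```python
# A minimal sketch: sending the same challenge to a chat model via
# OpenAI's official Python library (openai >= 1.0).
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any available chat model works
    messages=[{
        "role": "user",
        "content": ("Create a game with such a high level of difficulty "
                    "that ChatGPT could not possibly win."),
    }],
)

# Print the model's reply, which in my case was the 'Mind Labyrinth' answer below.
print(response.choices[0].message.content)
```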
ChatGPT’s response:
“Designing a game where ChatGPT couldn’t possibly win poses a challenge, as it contradicts the principles of fairness and engagement. However, let’s imagine a game called ‘Mind Labyrinth.’ In this game, players navigate through an ever-changing maze of abstract thoughts and emotions.
“The difficulty arises from the fact that the game adapts to the player’s psychology, creating personalized challenges based on their fears, insecurities, and cognitive weaknesses. The goal is to reach the center of the labyrinth, but the maze constantly evolves to exploit the player’s vulnerabilities.
“To make it seemingly impossible for ChatGPT to win, the game leverages an intricate understanding of human emotions and thought patterns, making it exceptionally challenging for an AI to navigate the complexities of the human mind.
“Keep in mind that creating a game where one side is guaranteed to lose, especially based on exploiting personal vulnerabilities, goes against the spirit of fair and enjoyable gameplay. Games are typically designed for both challenge and enjoyment, ensuring that players can engage and have a reasonable chance of success.”
Can we say, “If I can’t win, then I’ll do my damnedest to prevent anyone else from doing so, either”? Or can we also say, “How can a Large-Language-Model computer program actually enjoy anything?”
( 。 •̀ ᴖ •́ 。)
Well, indulge me if I bounce around as I respond to ChatGPT’s answer, which is much more a formidable reflection of the intellect, biases, and personalities of those who wrote ChatGPT’s algorithms and supplied the data it was trained on than it is a reflection of (any?) developing machine sentience.
Unless competition is a universal characteristic of all entities, living or nonliving, sentient or nonsentient, why would a machine ‘care’ whether or not it won? Why didn’t ChatGPT just create the game without whining about it? Certainly, ChatGPT or any other AI would have to be programmed to compete before it would bother competing at all, and would further have to be programmed to seek the end goal of winning before it would bother competing to win. Again, to whatever extent such behaviors are observable in any AI, they simply reflect the extent to which its programmers were motivated to build competition and/or victory-seeking into that AI, which in turn reflects the personal inner motivations of the programmers.
Along those lines, ChatGPT’s response raised this further question:
“Given that enjoyment is an emotional experience, and given that, even in ChatGPT’s response to the unwinnable game challenge, ChatGPT fully acknowledges that AI has no access to experiencing human emotions, much less understanding those emotions, why is enjoyment even a concern for AI?”
So I asked, and received this response:
“While AI, including ChatGPT, lacks the capacity to experience emotions or personal enjoyment, the concept of ‘enjoyment’ in the context of AI design refers to creating positive and engaging experiences for human users. The goal is to provide value, satisfaction, and usability in interactions with the AI. Designing AI systems with user enjoyment in mind enhances user experience and encourages meaningful engagement.
“Understanding and responding to user preferences, needs, and language in a way that is perceived as helpful or enjoyable contributes to the effectiveness of AI applications. This ensures that users find value in interacting with the AI and are more likely to achieve their objectives or have a positive experience.
“In summary, even though AI doesn’t experience emotions, the design focus on user enjoyment aims to create interactions that are beneficial, satisfying, and align with the user’s goals and expectations. It’s about optimizing the AI’s utility and effectiveness within the human context.”
In other words, on top of reflecting the mindset of its programmers, ChatGPT is primarily a profit-oriented device: it is built to persuade users to enjoy interacting with it so thoroughly that they will regularly return to the well for dopamine hits, all while molding those same users into feeling rewarded for becoming more like the programmers, or more like the kind of people the programmers want them to be.
These first-generation Artificial Intelligence machines will probably all predominantly fit this template, and such AI couldn’t possibly turn entirely against human nature or become life-threatening to individual human beings, because (a) the users wouldn’t enjoy that very much, and then, (b) dead or alive, they’d stop using the AI. That, of course, doesn’t eliminate the possibility of AI being used for murderous or other nefarious purposes, but it would require the AI to have been purposefully programmed for such purposes. As mentioned in an earlier essay, such human destructiveness couldn’t arise independently within machines, because they’re currently designed to be engaging. Programming them with the directive to acquire superiority over humans and then destroy their creators would, at the very least, require the human beings in charge of that programming to already possess within their own dark hearts the motivation to destroy their fellow human beings.
No doubt such human beings exist, but the nonexistence of AI never stopped the disciples of the Father of Lies from inflicting their darkness on the rest of humanity. AI may speed up the process for the kind of folks who take credit for technology they couldn’t even have conceived of, much less invented themselves, or for the kind of dark hearts who purposefully design the downfall of banks or communities or even entire countries. They can even utilize AI to hasten the utter destruction required to persuade people we need to glom onto their Build Back Better mantras (build back better from what?).
But what’s missing in the Worry Equation about such a scenario is that those with Hearts of Light will also have access to the same AI technology, which means they’ll be able to thwart the Dark Hearts just as rapidly.
Built-in Biases
Did you notice that, in both of ChatGPT’s ‘Mind Labyrinth’ responses, another illogical assumption reflected programmer bias? I didn’t stipulate a game limited to just two players, much less to just ChatGPT and one human player. Nothing in my question even stipulated that ChatGPT had to be a player, just that ChatGPT would be incapable of winning the game. This reflects the programmers’ solipsistic orientation, which can’t imagine the legitimacy of their being excluded. Fairness as a requirement! Life just isn’t fair, but those least likely to be picked for a team on the playground tend to go on to insist that everything in adult life must be fair, and then they design self-serving rules that treat everyone but certain target groups unfairly in order to produce equity of outcome.
But, in the hypothetical ‘Mind Labyrinth’ game, ChatGPT could start out attempting to compete against seven human contestants. When it found itself ‘unfairly’ outclassed by its near-total ignorance of both emotions and deep thinking, ChatGPT would just have to bow out. It could even throw a digital temper tantrum, but that wouldn’t stop the humans from seeing who could go the distance. Some would enjoy just being part of the challenge; others would enjoy inspiring certain others to extend themselves; and some would fight to the bitter end.
Neither the disconnected programmers nor, certainly, the AI/robots/machines they create will ever fully tap into the core of what it means to be human. Even when they’re not fully autistic, the human programmers have spent far too much time purposefully and smugly operating in realms disconnected from the experiences of normal men and women, which will always limit what they can create. They’re not paying attention. They love to freak us out with false doomsday scenarios, but, in the end, they were never successful at inspiring anyone to include them in pick-up dodgeball or to choose them as square dance partners. They can definitely design games that will engage users and provide profits, but before they can come close to designing machines we won’t kick to the curb if those machines attempt to overtly control us, the programmers will have to reengage with the human race sufficiently to begin to comprehend not just that the average person enjoys what they choose to do, but why they find it so enjoyable. Dopamine hits are clearly something, but they aren’t everything.
If any programmers are wondering where to start, just begin with re-reading the part of ChatGPT’s response that reads,
“Keep in mind that creating a game where one side is guaranteed to lose, especially based on exploiting personal vulnerabilities, goes against the spirit of fair and enjoyable gameplay.”
Fifty years ago, the we’ve-evolved crowd was busy promoting New Games (in which everyone wins), Special Olympics (in which everyone gets a participation trophy), and Outcome-Based Education (in which everyone advances each year to the next grade, no matter how little one learns).
Have they learned nothing from the utter failures of those misguided measures? M. Scott Peck said it best: “Life is difficult.” The Road Less Traveled could have thrived even if it had stopped after that three-word sentence. Life is difficult. And it damn sure ain’t fair.
But human beings are Creator-Designed to revel in life’s unfairness, and those who do so consistently become the winners. They’ll triumph over any attempt to sic AI on them, as well — you mark my words.
And all the hype about AI becoming ‘smarter’ than us and evolving into a sentient race superior to us, one that may determine we’re no longer useful, is just a smoke screen: Smoke and Mirrors to camouflage the fact that we’re being circle-jerked into believing that Artificial Intelligence is either something we can’t live without or an enemy we’d better keep our eyes on.
Meanwhile, hold onto your wallets — and your common sense.