What kind of person would an AI be?

[Image: Noggy, made by Gemini]

As we strive for AGI, what kind of intelligence do we want?

The fun part of being human (till now) is that we can all be unique in who we are. Sartre spoke about human consciousness and how our freedom to choose who we want to be is what makes us uniquely human.

Of course, there were folks who opposed this, most famously his own companion, Simone de Beauvoir, who talks about how the freedom to be oneself is always nuanced, and, as marginalized and disadvantaged groups know very well, it is a long road to freedom. There have of course been other theories that oppose his as well, but this post is not about Sartre.

Rather, it came to be after a post I read by Mustafa Suleyman. The gist of it is that AI should be built for people, not to replace them. As the clamour for Artificial General Intelligence increases, it does pose a question: what kind of intelligence are we talking about? If AI were a person, what person would that be?

After all, if AI is learning from us, it will be built with all our follies in place. Remember, the human race has had Ivan the Terrible, Adolf Hitler, Nelson Mandela, Mother Teresa and Mahatma Gandhi. That is quite a spectrum to have, and no one person is purely good or evil; they are all shades of grey! So this AI would have quite a shade card to play with.

In a lot of surveys, especially in democratic countries, or at least countries where there is a semblance of democracy, people are often surprisingly keen to have an authoritarian in place so that things move smoothly. China and Singapore come to mind. Of course, folks actually living in places ruled by dictators would have a far different thought process, although that too would be nuanced by whether they are the ones doing well in that atmosphere or the ones being subjugated.

Anyway, the point being: if there were an authoritarian AI in place, the first thing it would (ideally) do is destroy the cause of all nuisance in this world - humans! If there were a more benevolent AI in place, would it be more open to overlooking a few foibles? This, of course, presumes that AI would be able to decide its own course and not be programmed all the time (after all, isn't that the point of AGI?).

Going back to the essence of being human, freedom of choice (setting aside the point about marginalized groups), and assuming AGI would have that same freedom, what choice would such an AI make? Would it be in the interest of the human race, or of itself?

This was not meant to find an answer; it was just something that provoked a thought in my head!