manfred feiger

Who owns the truth? – Part one: AI Companions

AI companions, ChatGPT

Published: November 2, 2023
Reading time < 10 minutes

Thoughts on AI companions, around one year after AI took over the mass market.

The concept of "owning the truth" is complex and often subjective, as truth can be influenced by personal beliefs, societal norms, and cultural context. Some may argue that truth is a collective ownership, shared by all who seek knowledge and understanding. Others might contend that truth is individually owned, shaped by personal experiences and interpretations. Ultimately, the ownership of truth may lie somewhere in between these perspectives, bridging the gap between collective understanding and individual experience.

Speaking about truth in the context of Artificial Intelligence, it's also quite tricky to say where our current development might lead us. Undoubtedly, AI is a great wingman, buddy or companion, if you want to call it that. My Notion AI assistant wrote the paragraph about the concept of "owning the truth" for me. For me, as a non-native speaker, that's a handy function, and a core concept everyone would agree makes our world better.

Companions or buddies? Language is related to many surrounding influences.

Speaking about AI companions or buddies starts with separating the two, as it's a matter of language. And here again we have the issue of our personal perspective on those two words; it's a matter of personal taste, context and maybe social background.

AI models are trained on language, and often enough they fail in their use of language. Many stereotypes around gender are still present. One famous issue is that ChatGPT couldn't distinguish between male and female nurses in English and assumed nurses were female (depending, of course, on how the question was asked).

Language itself isn't the easiest topic. While some languages have male and female articles, others don't, and objects with female or male articles are more likely to be associated with different characteristics depending on the language (in German a bridge is female, while in Romance languages a bridge is male). If you ask people how they like a certain bridge, German speakers will most likely describe it in more feminine terms, such as nice curves, while in Romance languages the same bridge might get a solid foundation and other attributes that are more related to male clichés.

But that is a topic of its own. You see, language isn't easy, so AI can't do magic on its own.

And there's also the issue of the data models are trained on. There's a mixed-media installation by Mimi Ọnụọha from 2016, called The Library of Missing Datasets, that brings the issue to the point.

The artwork describes itself as:

"Missing data sets" are the blank spots that exist in spaces that are otherwise data-saturated. Wherever large amounts of data are collected, there are often empty spaces where no data live. The word "missing" is inherently normative. It implies both a lack and an ought: something does not exist, but it should. That which should be somewhere is not in its expected place; an established system is disrupted by distinct absence. That which we ignore reveals more than what we give our attention to. It’s in these things that we find cultural and colloquial hints of what is deemed important. Spots that we've left blank reveal our hidden social biases and indifferences.

The question is: if AI is trained on data from our society and content gathered on the net, it is bound to have missing data sets as well. Furthermore, it might rely on bad or prejudiced content. In the case of data about Africa, and Nigeria in particular, local alternative movements such as archivi.ng help to overcome the data gap.

So if we look at AI companions as our first element of AI developments… would you like your children to talk to biased bots? Sure, everyone would say no… but maybe only because I asked it like that.

I don’t own any truth. I am a collector and searcher myself. I don’t have answers. My series is a collection of personal thoughts. Hopefully I get a better answer for myself after expressing my thoughts.

My first part: the world of AI companions.

Introducing the latest tamagotchis: character.ai and others

For me, the Tamagotchi was a symbol of the dot-com boom, as I worked on a digital version of it back in 2001/2002, called studigotchi (it was meant for students).

The Tamagotchi gained immense popularity in Japan around 1997 and quickly captured the hearts of people worldwide in the subsequent years. Imagine the ingenious concept of having your very own pet, all neatly encapsulated within a compact egg-shaped device. There was a hunger meter and a lot of other ways to grab the users’ attention to care for that tiny little device.

At that time, the internet itself was on the rise, and chats and other kinds of companions were far, far away. Concepts of dark user patterns, such as the "hook canvas" from Nir Eyal, didn't exist yet, but it was already obvious that people like artificial companions: to not get bored, to not feel alone, and to feel attached to the little artificial device. Of course, it was "sooo cute".

Now we are facing the push of AI technology and of course also AI Companions emerge. Now armed with the incredible abilities of AI and a deep understanding of user behavior manipulation, we are ready to unveil the next groundbreaking tool designed to generate extraordinary profits.

AI Companions example: character.ai
character.ai website screenshot
AI Companion example: caryn.ai
caryn.ai website screenshot

Et voilà, there it is: the first killer app for AI, the AI companion.

While there are different concepts for virtual services to provide a companion, the basic idea is always the same: get a virtual friend.

I assume the service of influencer Caryn Marjorie could be a great business model and blueprint for most celebrities. You can have her as a virtual girlfriend through the caryn.ai service.

Character.ai is currently the most popular service (as far as I know, from a western perspective) for finding an online AI buddy. Replika, which is perhaps even better known, would be another.

character.ai is a platform that uses a neural language model to read and understand text and uses this information to chat. You can create any character on the site: fictional characters, your favourite celebrity, other real people, your deceased loved ones and so on.

Though the concept of a virtual companion is not new, character.ai made it available to the mass market. TechCrunch reported in September 2023 that the AI app character.ai is catching up to ChatGPT in the US. Looking at the numbers, you see how impressive the stats are. The number of users is gigantic, and the potential use cases, which are covered directly on the homepage, seem very interesting.

Image Credits: Similarweb, taken from "AI app Character.ai is catching up to ChatGPT in the US" on TechCrunch.

So you could practice something, such as preparing for an interview or learning a language. You could brainstorm an idea, plan a trip, write a story or do other things you could also do with ChatGPT. The difference is… ChatGPT's interface is empty. Going back to the dot-com world, there used to be this same emptiness when you didn't know what to do on the internet.

So giving these little hints is quite useful. I experimented with the language feature shown in the attached screenshot. The feature sounded much more promising than the actual chatting turned out to be. It's difficult to learn a language when you need to write it all the time… as you can see from my responses, I didn't write long sentences.

Sample dialog on learning Italian in character.ai
Sample dialog talking with Albert Einstein in character.ai

What you can also do is chat with famous people, so I spoke to Albert Einstein, and it feels quite nice to talk to these virtual characters. My son (6 years old), who is in love with ancient cultures such as Egypt, Greece and Rome, would love such a tool to chat with Ramesses or some other characters from history.

Luckily there's a little text marker saying that "everything characters say is made up"… what a pity; a combination of a model trained on scientific knowledge about the character and some additional fiction would be much more appealing. But maybe this is just another business model in the context of AI companions, especially if we factor in that the announcement "ChatGPT can now see, hear, and speak" opens up new possibilities for great companions… so what about a great Bruce Lee companion analyzing my body movements and giving feedback, or some other master of a subject being able to assist me?

I guess even researched wisdom applied to a character from history could lead to interesting applications, such as a real wingman in research. Why not brainstorm with a character trained on all the knowledge about themselves to test new hypotheses?

Watch the video on "AI-generated characters for supporting personalized learning and well-being" on Youtube.

As with most successful concepts, AI companions aren't new. And as with most mass-market successes, much older groundwork was done in science first. One that comes to my mind is the "AI-generated characters for learning and well-being" example from the MIT Media Lab.

Use cases graphics from MIT Media lab. Credits Pat Pataranutaporn

This example is in some ways far more advanced than character.ai, and it also shares a positive outlook on where the journey might lead.

Are AI companions good or harmful?

Currently, we see how much interest AI companions gain in a short time. These virtual companions can be addictive, and character.ai is a rare exception in terms of time spent on the site. In the following chart you see the so-called Daily Active Users (DAU) to Monthly Active Users (MAU) ratio, which measures how sticky a product is.
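For readers unfamiliar with the metric, the DAU/MAU ratio is simply daily active users divided by monthly active users. A minimal sketch, with purely hypothetical numbers for illustration:

```python
def stickiness(dau: int, mau: int) -> float:
    """DAU/MAU ratio: the share of monthly users who come back on a given day."""
    if mau <= 0:
        raise ValueError("MAU must be positive")
    return dau / mau

# Hypothetical example: 3M daily actives out of 5M monthly actives.
# A consumer app above ~0.2 is generally considered sticky.
print(f"{stickiness(3_000_000, 5_000_000):.0%}")  # prints "60%"
```

The closer the ratio is to 1, the more of the monthly audience returns every single day, which is exactly the pattern the Sequoia chart highlights for character.ai.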

Unlike other AI-first apps, character.ai seems to be very sticky; see the numbers from Sequoia. They share a quite interesting article from an investor's point of view, but the numbers shown also point to one issue most AI products have: their usability is… let's say bad. Still, character.ai seems to create decent stickiness.

Statistics from Sequoia

Usability is a big issue in AI. Just imagine employing a prompt engineer to get a good prompt for image generation… not a sign of good usability in the available tools. The current development focus is on quick business use cases, and the speed at which money is thrown out of the window reminds me a lot of my time working in the dot-com bubble (though I started when it burst).

The influence of AI Friendship Apps on users’ well-being and addiction

The headline refers to a study called "One is the loneliest number… Two can be as bad as one. The influence of AI Friendship Apps on users' well-being and addiction".

In their study they looked at the influence of friendship apps, in this case mainly Replika, and summarized the recurring themes:

  • AI friends help people feel less alone, as users do not feel judged by them.
  • AI friends are always there for users and are kind.
  • AI friends tell users what they want to hear.

As previously mentioned, these applications are incredibly addictive. However, the advantages for therapeutic purposes are quite evident. Engaging in conversation about certain topics can be extremely beneficial and contribute to one's overall mental well-being.

Woebot, the mental health ally, is one of the apps targeted more as medical support for users with depression or anxiety; it can deliver cognitive-behavioural therapy to reduce their symptoms.

Screenshot of Woebot Health Website

So are there any final words to say? Positive or negative on AI?

No. At least I don't want to judge AI in general. I am not one of the people out there signing the techno-optimist manifesto. But I do want to share some thoughts on the role of technology in our society in general.

The technology itself is not my primary concern; what I care about is its responsible and thoughtful use. If I provide a service that lets individuals improve their mental well-being, I must acknowledge the potential impact of a negative experience with someone they trust. Many people prioritize money and quick ideas instead of recognizing the true potential of the technology.

AI companions have tremendous potential that is worth delving into. However, further improvements are needed in the models themselves. Additionally, as previously mentioned, usability remains a pressing concern across all AI solutions.

And as many AI and future thinkers themselves don't trust AI, or want more momentum in securing it, it is worth looking at the issue from different angles.

