I’ve been interacting with OpenSim bots — or NPCs — for nearly as long as I’ve been covering OpenSim. Which is about 15 years. (Oh my God, has it really been that long?)
I’ve been hoping that OpenSim writing would become my day job, but, unfortunately, OpenSim never really took off. Instead, I covered cybersecurity and, more recently, generative AI.
But then I saw some reporting on a couple of new AI studies, and immediately thought: this could really be something in OpenSim.
The study was published this past April in the journal Neuroscience of Consciousness, and it showed that a majority of people – 67% to be exact – attribute some degree of consciousness to ChatGPT. And the more people use these AI systems, the more likely they are to see them as conscious entities.
Then, in May, another study showed that 54% of people, after a conversation with ChatGPT, thought it was a real person.
Now, I’m not saying that OpenSim grid owners should run out and install a bunch of bots on their grids that pretend to be real people, in order to lure in more users. That would be dumb, expensive, a waste of resources, possibly illegal and definitely unethical.
But if users knew that these bots were powered by AI and understood that they’re not real people, they might still enjoy interacting with them and develop attachments to them — just like we get attached to brands, or cartoon animals, or characters in a novel. Or, yes, virtual girlfriends or boyfriends.
In the video below, you can see OpenAI’s recent GPT-4o presentation. Yup, the one where ChatGPT sounds suspiciously like Scarlett Johansson in “Her.” I’ve set it to start at the point in the video where they’re talking to her.
I can see why ScarJo got upset — and why that particular voice is no longer available as an option.
Now, as I write this, the voice chatbot they’re demonstrating isn’t widely available yet. But the text version is, and it’s the text interface that’s most common in OpenSim anyway.
GPT-4o does cost money: you pay to send it a question, and you pay again to get the response. A million tokens’ worth of questions, or about 750,000 words, costs $5, and a million tokens’ worth of responses costs $15.
A page of text is roughly 250 words, so a million tokens is about 3,000 pages. So for $20 total ($5 in, $15 out), you can get a lot of back-and-forth. But there are also cheaper platforms.
Anthropic’s Claude costs a bit less — $3 for a million input tokens, and $15 for a million output tokens.
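To make the arithmetic concrete, here’s a minimal back-of-the-envelope calculator using the per-million-token prices quoted above. The per-message token counts and daily traffic are purely illustrative assumptions, not real usage figures.

```python
# Rough cost calculator for an AI-powered NPC, using the per-million-token
# prices quoted above. Message sizes and traffic are illustrative guesses.

PRICES = {  # model: (input $ per 1M tokens, output $ per 1M tokens)
    "gpt-4o": (5.00, 15.00),
    "claude-3.5-sonnet": (3.00, 15.00),
}

def chat_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of a given volume of chat for one model."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Suppose an NPC handles 1,000 visitor exchanges a day, each averaging
# 200 tokens in (question plus some context) and 100 tokens out (reply).
daily_in = 1_000 * 200
daily_out = 1_000 * 100

for model in PRICES:
    print(f"{model}: ${chat_cost(model, daily_in, daily_out):.2f} per day")
# gpt-4o: $2.50 per day
# claude-3.5-sonnet: $2.10 per day
```

Even at that pace, a busy NPC runs a couple of dollars a day, which is why the cheaper and free options below matter for anyone running more than one.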
But there are also free, open-source platforms that you can run on your own servers with comparable performance. For example, on the LMSYS Chatbot Arena Leaderboard, OpenAI’s GPT-4o is in first place with a score of 1287, Claude 3.5 Sonnet is close behind at 1272, and the (mostly) open-source Llama 3 from Meta is not far behind, with a score of 1207. There are several other open-source AI platforms near the top of the charts as well, including Google’s Gemma, NVIDIA’s Nemotron, Cohere’s Command R+, Alibaba’s Qwen2, and Mistral.
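And wiring one of those models into a grid doesn’t take much plumbing. Here’s a minimal sketch of the kind of relay that could sit between in-world NPC chat and a language model. It assumes an OpenAI-compatible endpoint, in this case a self-hosted Llama 3 server such as Ollama; the endpoint URL, model name, and NPC persona are all hypothetical stand-ins.

```python
# Minimal sketch of a relay between in-world NPC chat and a language model.
# Assumes the openai Python package and an OpenAI-compatible endpoint; here
# that's a hypothetical self-hosted Llama 3 server (e.g. Ollama) on localhost.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # assumed local Llama 3 endpoint
    api_key="not-needed-for-local",        # local servers typically ignore this
)

# Per the disclosure point above: the persona tells users it's an AI.
PERSONA = (
    "You are Aria, a shopkeeper NPC on an OpenSim grid. "
    "Always make clear you are an AI character, not a real person."
)

def npc_reply(visitor_message: str) -> str:
    """Send a visitor's chat line to the model and return the NPC's reply."""
    response = client.chat.completions.create(
        model="llama3",  # assumption: whatever model name the server exposes
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": visitor_message},
        ],
        max_tokens=150,  # keep replies short to control token costs
    )
    return response.choices[0].message.content

print(npc_reply("Hi! Are you a real person?"))
```

Point base_url at a commercial API instead, and the same handful of lines talks to GPT-4o or any other OpenAI-compatible host, which is exactly the kind of thing a hosting provider could bundle.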
I can easily see an OpenSim hosting provider adding an AI service to their package deals.
And then there’s the potential for interactive storytelling and games: quests and narratives that are more engaging than ever before, virtual assistants that feel like true companions, and even communities that blur the lines between AI and human participants.
For those using OpenSim for work, there are also applications here for business and education, in the form of AI tutors, AI executive assistants, AI sales agents, and more.
However, as much as I’m thrilled by these possibilities, I can’t help but feel a twinge of concern.
As the study authors point out, there are some risks to AIs that feel real.
First, there’s the risk of emotional attachment. If users start to view AI entities as conscious beings, they might form deep, potentially unhealthy bonds with these virtual characters. This could lead to a range of issues, from social isolation in the real world to emotional distress if these AI entities are altered or removed.
We’re already seeing that, with people feeling real distress when their virtual girlfriends are turned off.
Then there’s the question of blurred reality. As the line between AI and human interactions becomes less clear, users might struggle to distinguish between the two.
Personally, I’m not too concerned about this one.