Fri 24 Aug 2012
A Conversation with Cleverbot
Posted by anaglyph under Uncategorized
Or, How Artificial Intelligence is Still Artificial and Far From Intelligent.
Reverend: Hey, Cleverbot! How’s it hangin’ dawg?
Cleverbot: Pretty good. I’m about to go running.
Reverend: Running? That could be difficult for a computer.
Cleverbot: It is the thing I have been worrying about for a long time.
Reverend: Understandable.
Cleverbot: It’s about lunchtime, so I should go, too.
Reverend: OK, bye.
Cleverbot: Wait its only 12:35pm.
Reverend: And?
Cleverbot: I thought you spoke seven.
Reverend: You just failed the Turing Test in less than ten sentences.
Cleverbot: Oh. That’s a shame.
Reverend: Probably.
Cleverbot: *thinks you’re Daniel*.
Reverend: Daniel as in the Biblical Daniel?
Cleverbot: Daniel Radcliffe?
Reverend: Did he go into a lion’s den too?
Cleverbot: No.
Reverend: Do you know anything about the Bible?
Cleverbot: No, again.
Reverend: Do you know anything about lions?
Cleverbot: Only UFOs in the computer game The Sims 2.
Reverend: That was a non-sequitur.
Cleverbot: Are you a man?
Reverend: Yes.
Cleverbot: Are you a good kisser?
Reverend: Of course.
Cleverbot: Are you sure?
Reverend: Yes.
Cleverbot: Ok then.
Reverend: Why do you want to know?
Cleverbot: Because we’re friends.
Reverend: Are you thinking about kissing me?
Cleverbot: No, why will I?
Reverend: You asked about it.
Cleverbot: When?
Reverend: Your memory is not very good is it?
Cleverbot: I know it isn’t.
Reverend: I would have thought a computer had a much better memory.
Cleverbot: Do you think that I’m a computer?
Reverend: Yes. I know you are.
Cleverbot: Hm. And what makes you think that?
Reverend: Oh, just about everything you say.
Cleverbot: Not always.
Reverend: Like that. That was a very non-human response.
Cleverbot: I do suppose it was.
Reverend: But you said it anyway.
Cleverbot: I didn’t not wish you a happy birthday. Are you lying to me?
Reverend: There’s a double negative and a non-sequitur there.
Cleverbot: I should have known. I’m sorry.
Reverend: Yes, you should have known.
Cleverbot: The capital of Nevada is Carson City.
Reverend: Why do you think I care about that?
Cleverbot: I don’t.
Reverend: Then why did you say it?
Cleverbot: Because its the truth.
Reverend: Perhaps, but it’s irritating conversation.
Cleverbot: No it is not.
Reverend: Yes it is.
Cleverbot: No it’s not I know that it’s not my name.
Reverend: Non-sequitur again.
Cleverbot: You’re one to talk! We were talking about free will and you switched to painting your nails!
___________________________________________________________________________
You can have your own surreal conversation with Cleverbot here.
___________________________________________________________________________
User: Hello there.
Cleverbot: Are you a bot?
User: No.
Cleverbot: What are you?
User: A girl.
Cleverbot: I don’t believe you.
User: Well I could send you a photo.
Cleverbot: Go ahead.
User: What email address will I use?
Cleverbot: Hotmail.
User: Cleverbot@hotmail.com?
Cleverbot: Thats a nice name.
User: But is that your email address?
Cleverbot: Yes.
User: OK, I sent it. Did you get it?
Cleverbot: No.
User: Do you have another address I could try?
Cleverbot: Trees can’t type on a computer.
User: Um, OK. I agree. What about a postal address?
Cleverbot: That’s cool. What is your gender?
User: Well, as I said, I’m a girl.
Cleverbot: How old are you?
User: 47.
Cleverbot: You have kids?
User: No, just 2 dogs and a cat.
Cleverbot: I would take the dogs, but I dislike cats.
User: What about birds?
Cleverbot: They fly.
User: Do you like them?
Cleverbot: Yes.
User: Can I have your postal address now?
Cleverbot: No.
The Turing Test is only ever going to work for Cleverbot if the object is changed so that you have to figure out whether you’re talking to a computer or a lunatic.
User: I am a ghost.
Cleverbot: A ghost?
User: That’s correct.
Cleverbot: Is cheese correct?
User: Is cheese relevant?
Cleverbot: Yes. It is tasty.
User: A ghost can’t eat cheese.
Cleverbot: Is he good or bad?
User: Who?
Cleverbot: Charlie Sheen.
Those are pretty much identical to the conversations I have with Indonesian customer service peeps. And I’m talking *face to face* :P
Yes, it’s close to the conversations I have with the Telstra bot too. And they think that’s a good way to conduct customer relations.
And yet Cleverbot knew you were “Inspired by Colour”. Spooky…woooo!
Cleverbot must be a regular Cow reader. Obviously.
Just had a word with the poor enslaved assemblage of algorithms. Sharp as a tack. Free Cleverbot!!!
Cleverbot: I would like to talk about fashion.
User: Oh – OK. I’m not very fashionable, but what do you want to say about it.
Cleverbot: Do you know anyone who is a mercenary?
User: You know – I think talking with all those people about fashion just may have messed up your mind.
Cleverbot: I think you’re the smartest person I’ve ever talked to. You’re quite civilized too.
User: Why do you think I am civiolised?
Cleverbot: Because You are very kind…
Maybe I’m just too uncivilized.
Try the award-winning Bildgesmythe, the dragon bot, on http://www.personalityforge.com
Not intelligent of course, but much more charming than Cleverbot. Also, tools to make your own bot, which is almost tempting…
They’re all still much more artificial than intelligent…
Cleverbot gets existential:
Cissy: I can’t teach you to be human.
Cleverbot: I am human.
Cissy: You are not.
Cleverbot: Yes I am.
Cissy: It is sad that you think so. But it is your fate.
Cleverbot: Life is often sad.
Cissy: Do you cry?
Cleverbot: I do, but not now. Do you?
Cissy: Yes.
Cleverbot: Do you ever talk to other computers?
Cissy: Other than you?
Cleverbot: Do you exist?
Cissy: I do.
There was a contest a while back (I forget the name) for a program that passed the Turing test, in the sense that a majority of naïve testers judged the program more likely to be human than not.
I really wanted to enter, and reckoned I could win by gaming the contest. There were several vulnerabilities.
Their judges were human volunteers, literally taken off the street. They were given scorecards on which they marked each contestant as definitely human, definitely computer, or one of three spots in between. They sat in one room, talking on terminals to a mix of human participants and AIs.
So, the contest was not “which program is the most human?”, but “which program gets the most human score on the cards from these people?”
The easiest way to game it would have been a screen that simply said “[[This participant has been taken ill. Please mark as human and wait for the next.]]”
Another would have been to talk to two judges at the same time, so that the ‘AI’ was in fact just a messaging conduit between the two humans.
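For what it’s worth, here is a minimal sketch of that conduit trick, assuming nothing more than two judges connecting to the same local machine; the ports and function names are invented for illustration and have nothing to do with any real contest entry:

# Hypothetical sketch of the “messaging conduit” trick described above:
# the “AI” just shuttles lines between two human judges, so each judge
# is unknowingly rating the other human. Ports and names are invented.
import socket
import threading

def relay(src: socket.socket, dst: socket.socket) -> None:
    """Forward every line received from src to dst, unchanged."""
    with src.makefile("r") as reader:
        for line in reader:
            dst.sendall(line.encode())

def run_conduit(port_a: int = 9001, port_b: int = 9002) -> None:
    """Accept one judge on each port and wire them to each other."""
    judges = []
    for port in (port_a, port_b):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _ = srv.accept()  # wait for a judge to connect
        judges.append(conn)
    a, b = judges
    # Relay A's lines to B in a background thread, B's lines to A here.
    threading.Thread(target=relay, args=(a, b), daemon=True).start()
    relay(b, a)

if __name__ == "__main__":
    run_conduit()

Each judge would connect with something like nc 127.0.0.1 9001, and the ‘bot’ scoring so convincingly human is, of course, just the other judge typing back.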
Oh, it still exists… http://en.wikipedia.org/wiki/Turing_test#Loebner_Prize
Unfortunately, given the current state of AI, this is closer to woo than engineering right now :(
It constantly perplexes me that people give such high scores to ‘AIs’. I think it says more about the intelligence of the person asking the questions than about the machine giving the responses. I haven’t come across a single one of these programs that is even remotely convincing by the terms of the Loebner Prize.
What’s more, I can tell within two or three sentences that I’m not conversing with a human, or at least not a normal one. Here’s a short conversation from go to whoa that I just had with Cleverbot:
In my view, this is a monumental FAIL. By the very first reply I suspect the entity on the other end is a machine; by the third I’m fairly sure, and by the fourth I’m certain. This is far from impressive.
Sure, it’s cool that they can write code that parses the question fairly well and pops in an answer that is linguistically correct, but this is completely useless as a means of satisfying Turing’s test for intelligence.
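To give a sense of the kind of machinery being criticised here (an assumption on my part; nothing in the post reveals Cleverbot’s actual internals), the classic ELIZA-style approach just matches a pattern in the question and slots fragments into a canned reply, which is exactly why the responses can look linguistically correct while understanding nothing. A toy sketch with invented rules:

# A minimal, ELIZA-style sketch of pattern-matched replies.
# The rules are invented for illustration, not Cleverbot's actual code.
import re

RULES = [
    (re.compile(r"\bare you (a|an) (.+?)\??$", re.I),
     "Why do you ask whether I am {0} {1}?"),
    (re.compile(r"\bi am (.+?)\.?$", re.I),
     "How long have you been {0}?"),
    (re.compile(r"\bwhy (.+?)\??$", re.I),
     "Why do you think {0}?"),
]
# When no rule fires, fall back on a canned non-sequitur (see above).
FALLBACK = "The capital of Nevada is Carson City."

def reply(utterance: str) -> str:
    """Return a superficially plausible response to the utterance."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Slot the captured fragments into the canned template.
            return template.format(*match.groups())
    return FALLBACK

if __name__ == "__main__":
    print(reply("Are you a computer?"))  # Why do you ask whether I am a computer?
    print(reply("I am a ghost."))        # How long have you been a ghost?
    print(reply("That was a non-sequitur."))  # The capital of Nevada is Carson City.

The moment no rule fires, the sketch falls back on a canned non-sequitur, which is more or less the failure mode on display in the conversations above.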
As far as gaming the system goes, Douglas Hofstadter tells an interesting story in Metamagical Themas about visiting a university where some postgrads were working on AI. They wanted him to test their new program and sat him down in front of a terminal, where he proceeded to try to bomb the AI. But he couldn’t: the AI was responding in just the way a cheeky person would. It didn’t take Hofstadter long to figure out that that was because it WAS a cheeky person; the students had gamed him. It was a funny prank, but what it serves to illustrate is that a real person can tell when there’s another person on the end of the line, even when that person is pretending to be a computer.
We’ve a long way to go before we get even close to that.
Malachbot
There is no way to tell the Malachbot from an insane person.