Turing test theme and quandary: if an AI is robust enough to fool a human in conversation, would that also mean the AI would need to be able to fool itself into believing it is actually human? Or would it just need to be a really good liar? Imagine our governments and judicial systems around the world, should even more skilled liars run the show…
Also, don’t delete me