A popular AI chatbot was caught lying on automated sales calls, telling users that it was human. The unethical behavior itself is one problem, but the more concerning element is that no guardrail was built into the AI script to prevent it from lying. If it can lie about its own identity, what stops it from progressing to lies about more serious matters, such as committing fraud?