There Will Always Be Lawyers
In an effort to advance its artificial intelligence capabilities, Facebook developed a project to teach its chatbots to negotiate. It didn’t go as planned. The task required two chatbots to negotiate a division of a pool of items. The items were assigned different values for each bot, representing how much it cared about each thing, but the only way for a bot to figure out what the other valued was through language, as humans do: if you ask about an item, you must care about it. While it was impossible for both bots to end up with the best deal, they were rewarded for completing the negotiation; unlike humans, they couldn’t just walk away.
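The structure of that task can be sketched in a few lines. The item names, counts and valuations below are purely illustrative assumptions, not Facebook’s actual experimental setup; the point is only that each side scores a proposed split against its own private values.

```python
# Illustrative sketch of a two-agent item-division task (hypothetical
# values, not Facebook's actual setup): a shared pool of items is split,
# and each agent scores its share under its own private valuation.

POOL = {"book": 1, "hat": 2, "ball": 3}        # items to divide

# Private valuations: each bot knows only its own.
VALUES_A = {"book": 6, "hat": 2, "ball": 0}
VALUES_B = {"book": 0, "hat": 3, "ball": 2}

def payoff(share, values):
    """Score one agent's share of the items under its own valuation."""
    return sum(values[item] * count for item, count in share.items())

# One proposed deal: A takes the book, B takes the hats and balls.
share_a = {"book": 1, "hat": 0, "ball": 0}
share_b = {"book": 0, "hat": 2, "ball": 3}

print(payoff(share_a, VALUES_A))  # 6
print(payoff(share_b, VALUES_B))  # 12
```

Because neither side can see the other’s valuation table, the only route to a good split is conversation, which is exactly where the experiment went sideways.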
Although these robo-negotiators learned to feign interest in a valueless item in order to concede it later – something humans do all the time, and an important negotiating tactic – Facebook shut the experiment down when the bots invented their own shorthand to haggle with each other, abandoning straightforward English. AI experiments routinely run into language problems.
Not that humans don’t glitch too. When it comes to data analysis and pattern recognition – in contracts, for instance – AI is already taking on the grunt work for negotiators of all stripes. JP Morgan replaced 360,000 legal hours with software that analyzes loan documents. This kind of processing is a robot’s field of dreams. But contingency thinking is a different game altogether.
So, apparently, is accuracy. As in every other industry now, the hype for automated legal solutions is deafening. Yet a federal judge in Manhattan is considering sanctions against a lawyer – a 30-year veteran of the New York bar – after he filed a brief in his case against an airline that was full of legal precedents bolstering his argument. The trouble: those cases don’t exist. He had used OpenAI’s ChatGPT to research the case, and the chatbot made up multiple precedents, complete with citations.
This type of confident hallucination makes ChatGPT unreliable and, in this case, could cost an actual lawyer a hefty fine and reputational damage.
While the New York judge mulls sanctions, a federal judge in Texas issued a rule that any brief filed before his court must disclose whether generative artificial intelligence contributed to the work product and, if so, attest that a human verified the facts cited. You can bet those will be billable hours. While tech-watchers promise ever more efficiency and less work for junior lawyers (and fewer junior lawyers), the truth is that, at least for now, a human lawyer will have to review AI-generated contracts and briefs, because AI doesn’t always dot the i’s and cross the t’s, despite having passed the bar exam.
Despite sensational victories over chess and Go masters, hardware can’t yet replace brainware when it comes to responding quickly and adjusting to surprises outside the parameters of its programming. As Alibaba founder Jack Ma says, humans are better than AI when it comes to doing things that take heart.
And negotiations take heart. Emotions are an important factor in negotiating. Because AI can easily miss microexpressions that convey crucial nuance, it will be a while before robots have a seat at the table. Negotiation takes psychological skill and emotional intelligence. Empathy helps all parties get to “yes.” According to hostage negotiator Chris Voss, demonstrating that you understand the other party’s perspective and showing respect are the most important aspects of getting what you want from a negotiation. Ultimately, negotiations hinge not just on what you say, but how you say it. And, as Facebook found in its experiment (and continues to find out), AI is still working out its language issues.
Negotiations aren’t just about price; they’re about values. A good haggler is flexible and able to throw curveballs that shake up talks, often making moves based on instinct. The basics of negotiation remain very much in the human domain.
AI is just not there yet, although that might change. Researchers Noam Brown and Tuomas Sandholm designed a program that learns from human mistakes, so it can get better at predicting them. Their program, Pluribus, plays no-limit Texas Hold ’Em poker against five other players. Like the Facebook project, the point is to better understand the dynamics of multi-party negotiations.
Designers taught Pluribus the strategies of top experts, but the program invented some tricks of its own, including unexpectedly large bets early in games. In other words, it learned to bluff. AI is continuously improving, but for the foreseeable future, the best deals are made through relationship-building, diplomacy and being able to read the room.