Why Financial Planners Have Nothing to Fear from AI

John Robinson


Now that the bloom is off the crypto rose, artificial intelligence – “AI” for short (and trendy) – is, to borrow from the title of Michael Lewis's best-selling 1999 book about the dotcom era, “The New New Thing.” Like every “new thing” before it, AI's potential applications are tantalizing, limitless, and potentially frightening. They are also likely overblown.



Artificial intelligence isn't exactly new.  Computer programs have been beating the best chess players on the planet for decades.  It has, however, quickly become a presence in our everyday lives.  For instance, I recently had to make a rather complicated change to my return flight plans on a multi-stop trip from Honolulu to New England. Instead of foisting me upon a disaffected, underpaid human, the forward-thinking airline put me in touch with an extraordinarily articulate and genuinely interested chatbot named Giselle. Giselle quickly and easily assessed my conundrum and proceeded to book me on a one-way trip to San Francisco via Dallas.

I replied that this was lovely, but I was actually trying to get home to Honolulu.  She politely referred me to a colleague and fellow chatbot at the airline named Raul. He could not have been more gracious, and was not the least bit flustered when my normally professional tone regrettably began to regress in frustration over his apparent inability to grasp my dilemma. He apologized to me for his shortcomings and escalated my complaint to Anderson, who reminded me a great deal of the age-old story of the Vermont farmer, who, when asked for driving directions by a lost young couple, replied in a thick New England accent, “You can't get there from here.”  Despite bringing me to the verge of profanity, Anderson eventually calmly passed me on to a real human being whose generic name I can't recall.  After two hours of chatbot Hell, this real person was able to figure out a flight route that would get me home to Hawaii.  Although the change cost me a couple hundred bucks extra, he lowered my blood pressure for free.

As this example clearly illustrates, the value of replacing human call centers with non-paid, unflappable chatbots should be obvious to all.  But, alas, not all airlines had the foresight to be early adopters.  Another airline I regularly use only recently elected to embrace the future and appears to be experiencing a few minor hiccups.  When I called the usual toll-free number for Customer Support, I received the following message:

“We have recently upgraded our customer support phone system.  Callers may expect longer than normal hold times.”

Hmmm… is it me, or does that not seem like an upgrade?  Fortunately, after a mere 22 minutes on hold, I was connected to a delightful chatbot who accidentally hung up on me.  At that point, my profanity fell on deaf ears, although, to be fair, Alexa probably caught (and remembered) every word.

In a similar vein, last week The Wall Street Journal ran a fascinating and informative piece on how AI is invading the online food preparation and nutritionist space.  Here's an excerpt:

When she asked OpenAI’s ChatGPT for recipes based on an upcoming grocery delivery, it gave her several enticing ideas. It also suggested a smoothie bowl of silken tofu, lentils, lemon juice and olives—topped with chocolate strawberry Cheerios “for crunch.”



With all of the recent uproar surrounding AI-generated original music that sounds exactly like real royalty-loving artists, and the possible displacement of real live rich and famous actors by equally charismatic and good-looking AI-generated personas, it is logical to think HAL (the sinister computer from Stanley Kubrick's sci-fi classic “2001: A Space Odyssey”) might soon be coming for far more mundane professions, including financial planning.

Indeed, there is some analogous precedent in financial services.  For instance, Intuit's TurboTax has largely eliminated the need for professional help for taxpayers with simple returns.  However, as easily as IBM's Deep Blue made short work of the game of chess, Congress has succeeded in making the IRS Code sufficiently complex that most businesses and individuals who itemize need to pay a small fortune to a CPA to avoid paying an even larger one to the state and federal tax coffers. In deference to my professional human cousins in the tax planning field, I am a better-than-average chess player, but I confess to being confounded by the Alternative Minimum Tax.

Readers of my content may recall that I recently took ChatGPT for a test drive to see if it might save me time in writing articles for the Financial Planning Hawaii blog and newsletter.  I asked the app to write an article explaining the six most significant new tax rules introduced by SECURE Act 2.0 for individual taxpayers.  To my great surprise and disappointment, the app failed miserably, though I gave it an A+ for its creativity in dreaming up fictional tax rules.  I believe the industry term is “hallucinations.”  ChatGPT also fell woefully short in replicating my biting sarcasm and dry rapier wit.

Still, it would be foolish not to consider the possibility that AI might evolve from its nascent state.  It would not be the first time the financial advice profession has faced a serious threat from technology.  In the 1990s, the Internet ushered in the rise of online trading platforms to compete with traditional brokerage firms.  This disruption inspired folks like me to adapt and extend our guidance beyond pure portfolio management and to incorporate comprehensive financial planning into our business models.

Ironically, after the dotcom bubble burst and most of the day-trading screen jockeys who fueled the early financial success of the online brokers traded themselves back to their original day jobs, the remaining players in the space, including our dear friends TD Ameritrade and Charles Schwab, pivoted to embrace independent financial planners and their larger and decidedly more predictable revenue streams.  In a twist of poetic irony, the online trading platforms that set out to replace financial planners like me ended up advancing and elevating our careers.  In doing so, those firms also managed to seriously disrupt the traditional big wirehouse brokerage firms – a development that has not caused me to shed a single tear.

More recently, so-called robo-advisors burst onto the scene a little more than a decade ago boasting sophisticated tax-loss harvesting software and complex, academically supported index fund and ETF portfolio models, paired with asset-based fees far lower than those of most traditional investment advisers and financial planners. Today, however, none of the major players in the robo-advisor space has managed to scale to profitability. Although they succeeded in attracting billions of dollars in assets, that money has been spread across hundreds of thousands of consumer accounts, most with balances under ten thousand dollars.  The revenue from these assets has proven insufficient to keep pace with the advertising spending needed to keep the robo-advisor hype train rolling.  What ultimately sank the robo-advisor business model, however, is the platforms' inability to meet consumers' desire for unique portfolios and their need to speak with real financial professionals about their individual problems and concerns.  The latter is very much analogous to my experience asking the airline's chatbot a specialized question.



Ultimately, it is more than just a desire for human contact that makes the financial planner indispensable to consumers.  It is far more tangible than that.  In my opinion, the fundamental reason financial planners have nothing to fear from AI is its inability to replicate critical thinking. And by critical thinking, I do not mean the kind of complex thought required to play a game of chess or compose a piece of music. Instead, I am referring to the distinctly human form of thought required to think beyond the requirements of the assigned task.  In some cases, such thought may be loosely categorized as “common sense” – though, in fairness to the algorithms, that category of thinking is sometimes lacking in their human counterparts as well.

Here are three examples from my recent professional experience that illustrate this point more clearly.

  1. A client, “Jane” (not her real name), recently inherited her brother's estate.  He had worked as an IT specialist at a national health maintenance organization and sadly passed away shortly before he planned to retire.  Since Jane's brother had also been a financial planning client, I had copies of all his beneficiary forms and knew that he had named his sister as the beneficiary on his retirement accounts, including his pension. However, when Jane called the HR department, she was informed that she was not entitled to the pension because the decedent was not her spouse.  When she told me, I informed Jane that the person with whom she spoke had given her misinformation and that she should call back and speak with a different HR rep.  Jane followed my advice but got the same response.  I then sent Jane an excerpt from the company's employee benefits handbook that explicitly stated that non-spouse beneficiaries are entitled to pension benefits when the decedent dies prior to retirement.  When Jane called a third time with the supporting document, a supervisor apologized and confirmed that Jane was entitled to the benefit, which had a lump-sum value of approximately $300,000.


  2. A client, “Ben”, inherited a 403(b) plan from his brother, “Jerry”, that was administered by Fidelity. Ben asked me for advice on how to receive the benefit.  The account value was a little over $700,000.  I advised that he should establish a beneficiary IRA to receive the proceeds and that current IRS rules would require Ben to deplete the account within 10 years of the date of his brother's passing.  However, when Ben called Fidelity, the rep advised that Fidelity does not do that and that the proceeds must be paid as a lump sum, which would have created a windfall in tax revenue for the IRS.  I then had Ben call Fidelity again, this time with me on the line. Armed with both a Fidelity publication espousing the benefits of inherited IRAs and the Internal Revenue Code, we pressed the issue; the rep who had originally spoken with Ben stammered a bit and asked to place us on hold while she checked with her supervisor.  A few minutes later, she came back and advised us that Fidelity had made a “coding error” on Jerry's 403(b) and that the proceeds could indeed be transferred into an inherited IRA.


  3. A longtime client, “Sandra”, moved from Hawaii to the mainland to be closer to her children following the death of her husband, “Roy”. In April, I received a panicked call from Sandra informing me that she needed $180,000 to pay her tax bill.  The culprit was the capital gain she realized on the sale of her house, and it took her entirely by surprise.  Obviously, property values are much higher today than they were when Sandra and Roy purchased the house in the 1980s. However, I guessed that the CPA had failed to account for the 50% step-up in basis upon Roy's death. Additionally, since I had worked with Sandra and Roy for more than 20 years, I knew that they had also undertaken two major renovations within the past ten years. I reminded Sandra of these improvements and asked her to check with her CPA to see whether they had been incorporated into the tax calculation.  It turned out the CPA had neither stepped up the basis nor known about the renovations. She asked Sandra for documentation of the renovations, which Sandra fortunately had kept.  Her final tax bill was just $18,000.
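
For readers curious how the arithmetic in a case like Sandra's plays out, here is a minimal sketch using entirely invented numbers. The sale price, original cost, date-of-death value, renovation costs, and the $250,000 single-filer home-sale exclusion below are all illustrative assumptions, not Sandra's actual figures, and the 50% step-up reflects the treatment common outside community-property states:

```python
# Illustrative only: why a 50% step-up in basis plus documented capital
# improvements can shrink a taxable home-sale gain dramatically.
# All dollar figures are hypothetical.

def taxable_gain(sale_price, original_cost, fmv_at_death,
                 improvements, exclusion=250_000):
    """Taxable gain after stepping up half the basis to fair market
    value at the first spouse's death and adding improvement costs,
    then applying the single-filer home-sale exclusion."""
    stepped_up_basis = original_cost / 2 + fmv_at_death / 2
    adjusted_basis = stepped_up_basis + improvements
    gain = max(sale_price - adjusted_basis, 0)
    return max(gain - exclusion, 0)

# Naive calculation: original purchase price only, no step-up,
# no improvements -- the kind of figure that triggers a panicked call.
naive = max(1_000_000 - 150_000 - 250_000, 0)   # 600,000

# Corrected calculation with the step-up and renovations included.
corrected = taxable_gain(
    sale_price=1_000_000,
    original_cost=150_000,   # 1980s purchase price (hypothetical)
    fmv_at_death=900_000,    # value at husband's death (hypothetical)
    improvements=200_000,    # two documented renovations (hypothetical)
)                            # 25,000

print(naive, corrected)
```

With these made-up inputs, the taxable gain falls from $600,000 to $25,000 – the same order-of-magnitude reduction Sandra saw – which is why keeping documentation of improvements and basis adjustments matters so much.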



These are just three examples from my everyday experience.  The monetary value of this guidance is obviously very real and tangible, and it is not easily replicated by artificial intelligence because AI lacks the ability to think critically.  AI can scan the metaverse for data, but it will never be able to tell when the client service rep with whom you just spoke is apathetic and lying through his teeth.  Just as it cannot replicate my sardonic humor, it also cannot raise the issues you did not think to ask about.


John H. Robinson is the owner/founder of Financial Planning Hawaii and Fee-Only Planning Hawaii. He is also a co-founder of fintech software-maker Nest Egg Guru.


Related Articles:

I pitted ChatGPT against a real financial advisor to help me save for retirement—and the winner is clear (Fortune)

A Financial Planner Takes ChatGPT for a Test Drive (Advisor Perspectives)

What happens when you let a chatbot plan your meals (Wall Street Journal)