“I would declare victory if in my professional lifetime we could make machines that are as intelligent as a rat.”
— Yann LeCun, Facebook’s chief AI scientist, quoted in the WSJ’s review of “SAM,” Jonathan Waldman’s book about the quest to build a bricklaying robot.
According to a recent PwC (PricewaterhouseCoopers) survey of 2,500 US consumers and business leaders, 72% believe that AI will fundamentally change the future and that having AI technology is a “business advantage.” The survey also found that people would be willing to give up some privacy and security if the technology improved their lives and their communities.
Predictive analytics, a primary application of AI, requires a massive amount of relevant data. If that data is continuously changing, many of the functions the system was built for fail and cease to be useful.
For many businesses, AI has taken on a mystical power: it will transform your business, solve all your problems, reduce all your costs, and turn your operation into a moneymaking monster. Many are now finding that AI isn’t advancing as quickly as predicted and that it is not the magic bullet promised.
Don’t get me wrong, AI is certainly not going away by any stretch of the imagination. But what has happened, especially during this pandemic, is that systems that use and learn from giant pools of data are not as capable as AI cheerleaders suggested.
Several high-profile businesses have recently shut down or cut back on their AI research efforts. Companies are now in a period where they are working to justify their application of AI in their businesses.
What we’re learning is that the hyped and magical AI, the kind that companies tell you will solve all of your business problems, simply isn’t here yet.
Yes, AI technology is useful for some pretty basic stuff today: voice-activated assistants, voice transcription, security biometrics, and tracking your behavior on browsers and social media to present you with content it determines you’d like. But the demand for AI is no longer the imperative it was over the past several years.
Of course, among the Big Five, AI technology is seen as a core business. Companies like Google and Apple have incredibly deep pockets and continued to add AI engineers to their ranks during the pandemic.
A just-released survey of nearly 1,400 AI professionals, conducted by O’Reilly Media, found the two most significant roadblocks to the use of AI in businesses are leaders who don’t appreciate its value and the difficulty of finding business problems in these firms for which AI might be useful.
Deep learning, AI’s primary method of solving problems, is good at identifying people and animals in photos and at beating humans at the strategy game Go. But deep-learning algorithms require enormous quantities of data to train, and they tend to fail when that data changes.
The pandemic has laid bare the shortfalls in the AI systems used today. Again, those systems are working well at identifying human voices, photos, and such. But more data-driven programs, such as supply-chain management and predicting shopper behavior, have fallen apart during the pandemic. Even in the best of times, most businesses simply do not have enough data to train AI systems to be helpful.
Companies are now reviewing AI models that were developed using pre-pandemic data, because their predictions can no longer be relied on.
Lockdowns, social-distancing rules, unemployment rates, and supply-chain disruptions worldwide have led to changes in customer behavior and new economic trends. These massive changes have caused current AI models to be less than reliable due to “model drift.”
Drift is a common issue that occurs when new data no longer looks like the historical data the program was originally trained on. These models are now out of alignment with current information, such as employment and retail data, which has changed dramatically during the pandemic.
Most of the AI models currently in use were trained on data collected years before the pandemic. And industries affected by the model drift include healthcare and employment, among many others.
The pandemic has created model conditions that did not previously exist, and the resulting changes in human behavior will cause model drift for quite some time.
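The drift described above can be checked for programmatically. Below is a minimal sketch of one common drift statistic, the population stability index (PSI), which compares the distribution of a feature in new data against the data the model was trained on. The 0.25 threshold is a widely used rule of thumb, and the sample data are illustrative assumptions, not figures from this article.

```python
import math
import random

def population_stability_index(expected, actual, bins=10):
    """Bucket both samples into the same bins and sum
    (a - e) * ln(a / e) over the per-bucket shares.
    A PSI above roughly 0.25 is commonly read as significant drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def shares(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Floor each share at a tiny value so the log term stays defined.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

random.seed(0)
pre_pandemic = [random.gauss(100, 10) for _ in range(1000)]  # training-era data
pandemic = [random.gauss(60, 25) for _ in range(1000)]       # shifted behavior

print(population_stability_index(pre_pandemic, pre_pandemic))  # → 0.0
print(population_stability_index(pre_pandemic, pandemic))      # large value: flags drift
```

A check like this only tells you that the inputs have moved; retraining on current data, as the companies in this article are now doing, is still the remedy.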
AI is still an incredibly long way from delivering what many people dream it will do for humanity. And at this moment, it is becoming clearer what it can and cannot achieve.
A lot of what companies promote as artificial intelligence is really nothing more than programming. Software that automates repetitive tasks, skills testing, and analysis has been around for a while now. Chatbots are cumbersome and time-consuming for healthcare candidates trying to find a job or information about employment; they simply scan your available website data and provide answers. In a recent trial we performed using several job chatbots, it took, on average, 15 questions and 7 minutes before we were presented with a requisition.
A healthcare professional, whether a nurse or an allied health worker, knows precisely what job to apply for. They don’t need to answer a series of profile questions. They just want to see the requisitions you have available. Quickly.
A highly detailed search overlay application and well-developed content on a branded career site can deliver faster and more informative results. Communicating with candidates during and after the application process can easily be managed without AI systems.
There are some significant challenges for AI technology in recruiting. For one, as we discussed above, AI requires an extraordinary amount of reliable data. Candidates’ resumes and profiles change from one social media platform to another. Today’s candidates often tweak their resumes to match keywords from your requisitions to increase the likelihood that their resume will be plucked out of the ether by your system. As we said earlier, people’s unpredictable nature has a significant impact on the reliability of the data collected. And AI systems marketed to retrieve data from social media platforms can end up with a plethora of different profiles for the same candidate.
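The duplicate-profile problem described above is essentially a record-linkage task. Here is a minimal sketch, with hypothetical candidate names and an assumed similarity cutoff of 0.85, of how profiles scraped from different platforms might be clustered back to one candidate:

```python
from difflib import SequenceMatcher

def likely_same_candidate(a, b, threshold=0.85):
    """Heuristic match: identical email, or very similar normalized names.
    The 0.85 cutoff is an illustrative assumption, not a standard."""
    if a.get("email") and a.get("email") == b.get("email"):
        return True
    ratio = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    return ratio >= threshold

def merge_profiles(profiles):
    """Greedy clustering: fold each profile into the first cluster whose
    representative (first member) it matches, else start a new cluster."""
    clusters = []
    for p in profiles:
        for cluster in clusters:
            if likely_same_candidate(cluster[0], p):
                cluster.append(p)
                break
        else:
            clusters.append([p])
    return clusters

# Hypothetical profiles for the same nurse, scraped from three sources,
# plus one unrelated candidate.
profiles = [
    {"name": "Janet Q. Smith", "email": "jqs@example.com", "source": "LinkedIn"},
    {"name": "Janet Smith", "email": None, "source": "job board"},
    {"name": "J. Smith, RN", "email": "jqs@example.com", "source": "career site"},
    {"name": "Marcus Lee", "email": "mlee@example.com", "source": "LinkedIn"},
]
print(len(merge_profiles(profiles)))  # → 2
```

Even this toy version shows why the problem is hard: name similarity is a guess, and a cutoff that merges “Janet Smith” with “Janet Q. Smith” may also merge two genuinely different people.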
Then there’s the issue of bias in programming. Merely removing a candidate’s age, gender, and race will not remove the potential for bias in other data present in the application and resume.
We’re a long way off from a perfect system. For the moment, it’s another of those bright shiny objects, that silver bullet of a promise to solve all your recruitment needs.