Artificial Intelligence versus Emotional Intelligence (AI vs EI)
AI expresses logical relations that are invariant with time and context and that machines can be programmed to apply. It can be deployed where a common body of scientific knowledge defines those relations, such as finance, mathematics, statistics, the physical sciences, and economics.
EI, on the other hand, is intuitive, undefined and unbounded, unexpressed, non-transferable, and not standardized over time and context; it varies from person to person, and even from one context to another within the same individual. One cannot define time-invariant rules for behavioral outcomes driven by emotions such as love, passion, anger, violence, respect, and hurt, or for decisions driven by multidimensional contextual considerations that are not amenable to defined logical relations.
While individual subject-matter analyses and decisions (for instance budgeting, taxation, or projections of turnover, profit, and income) can be subjected to AI, since each has defined rules and formulae that operate on basic data and apply growth rates for the future, business decisions on where to invest, product pricing, HR decisions and policy, and production management are multidimensional and cross subject-matter boundaries, where cross-functional considerations, compromises, and negotiations matter in decision making. Such scenarios are not amenable to AI in their entirety; AI can be used in a limited sense to dig out potentially invisible associations by simulating past data, with limited accuracy. But past data has limitations in arriving at outcomes when human behavior is involved. For instance, how would one use AI for a business decision on investing in fashion or lifestyle products, which are themselves driven by a host of factors such as individual tastes, spending power, awareness, demographic behavior, and peer comparisons, factors that are neither quantifiable nor subject to sustained logical relationships?
All of us have experienced AI at play when we call a helpline number, and we know how frustrated we are when we don't get answers to questions that only human-to-human interaction can convey effectively and resolve. But companies still use it because they save on cost, or pass the cost on to the customer (caller) by wasting our time and leaving us frustrated with our issue unresolved, or they do it out of FOMO (fear of missing out), following the crowd for fear of temporarily lost opportunities.
AI also tends to influence our thinking and responses, because repeated exposure to the same stimulus (noise, information, visuals, ...) can make us believe that that is the way to be in this changing world: either we change our behavior or we fall into depression, feeling out of sync with peer groups, making comparisons, feeling less than others, and developing an inferiority complex. (I know of psychiatric cases shared by a practicing psychiatrist on the profile of patients and cases he is made to deal with, thanks to the influence of technology on our psyche.)
We should desist from deploying AI in situations where it is not the most appropriate strategy; but businesses use it either out of pressure to follow the crowd and not be left out, or for cost savings at someone else's expense.
Many times the misplaced use of AI can lead to disastrous consequences. A Google search for chest pain may tell you it is a heart attack and send you for open-heart surgery; but when you consult an experienced medical professional who uses touch and feel, as well as a detailed discussion with you on potential associated symptoms, it may be diagnosed as something that needs no intervention and will settle by itself, or that needs minimal intervention. The real problem may be something else entirely, but AI takes decisions on stated symptoms alone.
Nowadays AI is misused by stock market analysts (with or without vested interests) to blindly arrive at investment advisories, simply by tracking share price movement over a period of time and making superficial judgments as if the price will move in only one direction. But discerning investors know when and how far to take such advice, and when to dump it.
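The kind of one-directional judgment described above can be sketched in a few lines. This is a minimal illustration with made-up prices, not anyone's actual method: fit a straight line to a rising window of past prices and project it forward, so the model can only ever predict that the trend continues.

```python
# Naive trend extrapolation: fit a least-squares line to past prices
# and project it forward. Hypothetical data; illustrative only.

def fit_trend(prices):
    """Least-squares slope and intercept over day indices 0..n-1."""
    n = len(prices)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(prices) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, prices))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def extrapolate(prices, days_ahead):
    """Project the fitted line `days_ahead` days past the last price."""
    slope, intercept = fit_trend(prices)
    return intercept + slope * (len(prices) - 1 + days_ahead)

# A steadily rising window of made-up prices: the fitted line insists
# the rise continues, whatever actually happens next.
history = [100, 102, 104, 106, 108]
forecast = extrapolate(history, 5)  # -> 118.0
```

The model has no notion of why the price moved; it simply carries the past slope into the future, which is precisely the superficial judgment the paragraph warns against.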
Similarly, in handling emotional or psychiatric cases, one cannot follow a rule or use statistical analysis to arrive at the cause and advise on the right interventions (including purely behavioral changes, where only a one-to-one personal interaction will work). Such personal interactions address a variety of potentially connected factors without a one-to-one logical or mathematical cause-effect relationship; only a package of behavioral changes, including self-introspection, will work. Solutions to many problems must be evolved holistically: they may have no mathematical relations, or even one-to-one logical reasoning, but a package of interventions can help, even where the internal logical relationships and their quantification are not known, do not exist, or have not yet evolved.
Applying AI, or even management theories and principles, on a one-to-one basis in the real world is, I feel, far from ideal. In real-world situations, one has to look at a situation holistically and arrive at an optimum (not ideal) negotiated solution based on best judgment and collective acceptance.
AI is driven by a mathematically logical framework based on predefined relationships that are time- and context-invariant; EI is driven by inclusive, loosely defined, non-exclusive multivariate relationships that leave the inputs open-ended, subject to interpretation, and context-variant.
AI used in performance evaluation, individual assessment, competency discovery, and judgments on individual behavior based on isolated incidents will do more harm than good, as its mechanistic approach to incident assessment can categorise a gem as trash.
I have had multiple personal incidents where my contextual behavior was in direct conflict with 'defined norms' of conduct, and in each of these situations the outcome was surprisingly positive.
In a project review meeting to resolve client issues, I, as project leader, inadvertently happened to sit on the client's side of the table. My team reminded me I was sitting on the wrong side. I spontaneously retorted, 'I am speaking for the client,' which set the entire audience laughing, and the client loudly announced that the meeting was over: no issues remained. This is contrary to how we are trained to behave, and to what AI would take as input for advising how one should conduct oneself in different contexts.
In another incident, in a final deal negotiation with a defense client (thirty participants plus the chairman on the client side, versus two from ours), when the client team commented that our manpower estimate was high, I asked them how they had arrived at their lower estimate. When the thirty-member client team had no answer after two minutes, the chairman announced that the meeting was over and the deal was ours. My colleague, a retired major general, got the shock of his life, and at the end of the meeting he burst out laughing at the way the proceedings had progressed and at their outcome.
In yet another incident, when I told a client our company was not interested in the project because we were changing our focus, he asked whether I could do it in my individual capacity; unbelievably, despite my lukewarm interest, the client gave me the job in my individual capacity, with a bonus payment at the end.
Speaking straight is generally considered undiplomatic in business deals and thought to risk losing business. I have proved repeatedly that this perception is wrong: what the client wants is people who speak straight and are honest. And there are many more such incidents.
Most of these incidents lead me to believe that if AI had judged my conduct against prebuilt logic on right and wrong as taught in management programs, I would have lost my job; the reality was totally contrary.
I have come to believe that AI has a role to play in scenarios where the logic is time- and situation-invariant and there are situation-independent material (not human) relationships among the variables. In real-world organizational scenarios, EI that is inclusive, situation- and context-sensitive, that recognizes undefinable scenarios and is open to greyness (not black and white), will prevail over AI. What is probably happening now is the initial frenzy over something novel, businesses' overenthusiasm to capitalize on it, and FOMO.
Real-world situational assessment and decision making are not mechanistic or formula-driven. They involve holistic consideration of multiple dimensions that vary from person to person, and are driven by the individual's considerations and confidence in handling the outcomes of a decision approach. Hence they are programmable neither across individuals, nor across homogeneous groups, nor even for the same person across contexts and time.
AI is primarily driven by machine learning from past data, on the assumption that the past reflects the present and the future. AI knows only the decision outcomes of the past in a given context, not the considerations behind them, and it uses this learning to drive decisions in the present and the future. Even if the considerations or logic are modified for the present context based on changes in the known variables and their values, this rests on assumed linkages between considerations and outcomes, stretched from the past to the present, without really knowing the dynamic interplay among those considerations and their outcomes.
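The point that such a system knows past outcomes but not the considerations behind them can be made concrete with a toy sketch. This is an illustration under invented data, not a description of any real system: "learning" here is simply recalling the most similar past case and repeating its outcome.

```python
# A toy "model" that decides by nearest past case. It knows what was
# decided before, never why, and assumes the past governs the future.
# All data is hypothetical.

def nearest_past_outcome(history, query):
    """Return the outcome of the historical case closest to `query`."""
    best_outcome, _ = min(
        ((outcome, abs(features - query)) for features, outcome in history),
        key=lambda pair: pair[1],
    )
    return best_outcome

# Past decisions as (context score, outcome) pairs.
history = [(1.0, "reject"), (2.0, "reject"), (8.0, "approve"), (9.0, "approve")]

decision = nearest_past_outcome(history, 8.5)  # -> "approve"
# If the world has changed since these cases were recorded, the model
# cannot know: it carries the past forward unchanged.
```

Whatever dynamic interplay of considerations produced those past outcomes is invisible to the model; it can only replay the record.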
I would say commercial considerations are overplayed and misused: to create hype and FOMO, to reduce costs, to manipulate people's psyches, and to create a mass following that celebrates AI, thereby benefiting service providers in the tech industry; to create mass hysteria, among the youth in particular, to follow the crowd; to derive financial benefit from the manipulated decisions of a spending, emotionally subjugated population; and to create an artificial environment.
Natural human decision making is driven by a multiplicity of considerations that vary across individuals, age and gender groups, social and economic classes, exposure, education, and experience, situational compulsions, consumption desires, and individual choices, preferences, priorities, and sheer personal likes and dislikes, along with demographic, historical, social, ethnic, religious, and spiritual compulsions and learnings, and so on. There is no right or wrong in one's choices, except when one indulges in something legally inappropriate in a geography. There is nothing to celebrate about AI driving FOMO and behavioral compulsions among the emotionally susceptible population. AI, by virtue of its wider and faster reach, enables manipulated opinion-making that creates a mass following anonymously. It is actually robbing the individual of his or her freedom of choice.
AI is, however, a useful tool for data mining and for examining and exploring associations (not reasoned relationships), as reasoning can be done only by the human mind, not by machines. Machines can only respond to and act on data and logic inputs, at a pace much faster than human beings.
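The distinction between an association and a reasoned relationship can be shown with a classic kind of example. The numbers below are made up for illustration: two quantities that merely rise together over the same months correlate strongly, yet nothing in the arithmetic tells a machine whether one causes the other.

```python
# Correlation measures association, not reasoning. Hypothetical data.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

ice_cream_sales = [20, 35, 50, 70, 90]  # summer months (made up)
drowning_cases = [2, 4, 5, 8, 9]        # same months (made up)

r = pearson(ice_cream_sales, drowning_cases)  # close to 1.0
```

A human reasons that a third factor (hot weather) drives both series; the machine sees only a number near 1 and has no basis to say more.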
AI, I believe, is a major impediment to genuine, unbounded, unhindered innovation by the virgin human brain.