The major technology breakthroughs that will solve many of the world’s problems have already been made. Artificial Intelligence can easily outsmart stock markets and deliver spectacular profits. Your first reaction to these comments is that they cannot be true, or the world would already look like a very different place.
They are true thanks to the revolutionary advances in technology recently and a few creative individuals who have genuinely worked out how to use the technology available. You just have not seen them due to other people. Technology has changed but people have not.
The hold up is caused by human nature – people with responsibility not being able to process information or adapt to change. These are sweeping statements, so let me explain.
People do adapt, but not as fast as technology has recently. To illustrate, let us consider a basic older technology that we are familiar with and can interact with effectively – a simple calculator.
Now if you asked a human to do a complex maths equation quickly and they gave you an answer, you would rightly apply human judgement in deciding whether to trust that answer.
If the person giving the answer had just won the Fields Medal, we would be inclined to believe them; if it was a five-year-old child, we would rightly be sceptical. But even with the cheapest calculator, we know it can calculate numbers better and faster than the smartest human. It is indeed faster than a million of the smartest humans working simultaneously, if that were even possible.
We take the calculator’s answer without emotion and just use the number without questioning it. As humans we have over some decades adapted to using this simple technology and interact with these devices efficiently.
We would not need to get to know the calculator socially over a few lunches to decide if we liked it enough to accept its answers. You may laugh at the silly image of having lunch with a calculator, but most people do the equivalent of that mistake with technology every day – the consequences for their careers are unfortunately not that funny.
When the power of the simple calculator is multiplied millions of times over, we start to struggle with it as humans. Yesterday in the office I was looking for some accounting numbers and asked what I thought was a simple question of a data scientist colleague. The data was in the corporate accounting system, he assured me, so he would quickly email what I was looking for.
On opening the email, I was instantly suspicious to see a Dropbox link containing zip files, and sure enough I had been sent an Excel file of several hundred thousand rows of data. The computer had not been asked a specific enough question, so I had no chance of comprehending the answer. The correct information was in fact there, but as a human it was useless to me. An organic brain simply cannot physically process that volume of information.
Whilst computers beat us in terms of raw memory and processing power, us humans are way more complicated in both good and bad ways. We as humans all know the good bits about being alive and sentient so that needs little explanation here. But what is relevant is how our evolutionary psychology limits us for now in being able to deal with the technological opportunities around us.
It is something scarily new, but for those who can get their head around it, riches literally await. We are already seeing the emergence of a super-rich class who have mastered technology transformation, whilst the remainder worry about their jobs being automated.
Now regarding these new super rich – our first human reaction is that it is impossible for you and me to aspire to becoming that. Our psychology and historical knowledge tell us that the elites can only get there by controlling thousands of workers and by leveraging the influence of their powerful friends.
How on earth could we step out of the front door of our house tomorrow morning and persuade thousands of people to follow us so we could become rich and powerful? Where would we get the millions of dollars that would get us started? We know that is next to impossible. But what if we could wake up tomorrow and get the equivalent of those thousands of people for free? We all can do that every day; we just don’t, through confusion, fear and human nature.
It is not just our fault as individuals. Our companies, social circles and society all reinforce the old historical way of thinking. The structures around us give us perverse negative incentives and a false sense of security. Our fears are calmed and our aspirations reassured by those around us being equally wrong.
Markets operate inefficiently, opportunities are missed, new technologies wait decades to be adopted all because “that is the way it is”. Call it politics, inertia, psychology, silliness, whatever – if only there was a way to cut through it, we could be rich and famous, surely?
Well these are very broad concepts, so let’s put this into the context of a specific story so it is easier for us humans to understand. Whenever I contribute articles to publications, I get lectures from editors about having a human interest story to make a concept relatable to readers.
The editors are entirely right, and this backs up the same theme throughout this article – the information and technology are all out there, but needs communicating in a way that can be processed by us humans. So to illustrate the theme without further ado, here is a human story, dear reader.
Excuse me in advance for going all gonzo and telling it in a rambling way, as it would be a shame to miss exploring the many side avenues along the way. This fascinating topic has many a complex angle, as with anything that involves us humans interacting with each other.
My colleague Paul is what many people would call a tech nerd. His idea of a fun Sunday relaxing is playing around with beta versions of cloud artificial intelligences being launched by the biggest global tech firms. They invite people to test them and he is one of a handful of people who do that.
He sits in front of a massive screen in an apartment where everything is automated by Alexa, surrounded by stuffed Minions toys, and explores the limitations of the most powerful computers in the world. The Minions would be about as much use as other people in assisting his endeavours, as he is already harnessing the power of millions of people for free in the form of the artificial intelligence he is working with. The brute processing power is all there; the AI just needs asking the correct question, and then one must have the ability to understand the answer.
Last year one of the largest AIs in the world, owned by Google, allowed public access for the first time through its TensorFlow interface. This would lead to some really “fun” Sundays for Paul, building AI neural networks running parallel algorithms in the hope of achieving automated machine learning.
For the AI to get its figurative teeth into something, a really massive publicly accessible dataset was needed, so the cryptocurrency markets were chosen: they are designed to be accessed online and are therefore easy to connect to computers. The stuffed Minions agreed this was a good idea, so the game was on.
Massive online datasets were loaded up into TensorFlow and the objective was for the AI to predict future cryptocurrency prices. One Monday I asked Paul how his playing was going and he casually mentioned he had contacted Google to correct their core AI programming twice.
Curious, I asked him how he had spotted that when nobody else had. Sure enough he had been attempting something that obviously nobody else had tried before. This really intrigued me – how had Paul and his Minions progressed further with AI than the giant technology and banking behemoths? Surely he could not have single-handedly beaten teams of thousands of programmers with research budgets of billions of dollars?
Well the simple answer is yes he could. He continued pushing the Google AI as far as it was capable and after a few months found its limit. He ran the datasets at strange hours of the day when he hoped Google would not notice the ridiculous amounts of processing power being taken up on their beta testing platform.
Fortunately, the answer came before they noticed and started asking for payment for using that much processing power. The answer was that it is not yet possible for an AI to accurately predict future financial market movements based on past patterns. There is no identifiable pattern, and the AI comes up with “random walk” nonsensical answers, which is its equivalent of saying it does not know.
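For intuition, the “random walk” verdict can be illustrated in a few lines of pure Python. This is a toy simulation of my own, nothing to do with Paul’s actual setup: in a random walk each move is independent of every move before it, so the correlation between consecutive moves hovers around zero no matter how much history you feed in.

```python
import random

random.seed(42)  # fixed seed so the demonstration is repeatable

# Simulate a random-walk "price" series: each move is an independent coin flip.
steps = [random.choice([-1.0, 1.0]) for _ in range(10_000)]

def lag1_correlation(xs):
    """Pearson correlation between consecutive elements of xs."""
    n = len(xs) - 1
    a, b = xs[:-1], xs[1:]
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / (var_a * var_b) ** 0.5

corr = lag1_correlation(steps)
print(f"lag-1 correlation of moves: {corr:.4f}")  # hovers near zero
```

However clever the model, there is no signal here to learn: yesterday’s move tells you nothing about tomorrow’s, which is precisely what the AI was reporting back.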
Now let us step away from the perspective of Paul and his Minions and consider the imagined position of his “competitors” at the big banks. If we look at the published results of AI “quant funds” run by the largest banks, they are broadly similar to that of the traditional human trader funds. So it would appear that they have not “cracked it” either despite all of their efforts.
A suspicious mind may maintain that they have cracked it and that there is a conspiracy whereby an alien lizard race are using the astronomical profits generated from successful AI trading to control humanity for sweaty and nefarious purposes. Whilst everything is theoretically possible, let us just take the simpler explanation at face value – the big banks have not cracked it.
So what is going on with these famous banks spending billions of dollars on AI research and getting nowhere? People tend to assume technological advance is about computing power, whilst it is in fact limited by the humans using those computers. One has to assume that they reached the same “random walk” barrier when building their AI neural networks.
Now let us put ourselves in their position – are they going to blame the computer or themselves for not thinking of an alternative? Of course to them it is the computer’s fault and the answer is a bigger AI. It is like in my household I jokingly blame the cat for any mistakes, as it is politer than blaming a family member and even better the cat cannot argue back.
Let us consider this situation in purely relatable human behaviour terms. Let us put ourselves in the shoes of the head of quant AI research at a major bank. You are being paid a million dollars a year, but you see that as justified given the investment you made in becoming so well qualified as a statistics professor. Psychologically you have to reassure yourself that the decades dedicated to staring at equations were the right life choice.
Your entire sense of self-worth is tied to others respecting your intellectual superiority. You have been given a global support team of thousands of specialists and a budget of billions of dollars, which totally reinforces that sense of self-worth. Even more vitally, the secure job and high salary allow you and those you love to have safe, comfortable lives. You would do anything to keep these physical and psychological necessities.
That “anything” in practice would be to look busy and blame the computer for any lack of progress. You are incentivised by your bank to do exactly that, so it is entirely rational to behave that way. Your instructions are to lead a large team and spend a large budget and that is precisely what you are doing.
The consequences of following these given instructions are that you become very important and are paid a lot of money. It is not entirely selfish either: from an ethical standpoint your actions are also providing livelihoods for those you care about. With your salary your children can go to the best universities, and thanks to the large organisational structure so can many of your colleagues’ children. In a way your acquiescence in the status quo is a responsibility to the many colleagues who depend on you for their jobs. So you are personally successful, rich, respected and the generous benefactor to thousands, all by taking a very conservative low-risk path in life – what is not to like about all of that?
So as not to pick on just one department of that big bank, let us cast a humorously critical gaze over a wider cast of characters, who would otherwise feel unjustly ignored for their “contribution”. Let us imagine all of the types of people you get at a big bank and how they would interact.
We have combined the aforementioned statistics professor, some drugged up traders selling things, some computer geeks fresh out of college, a Kafkaesque political management bureaucracy who know they could have been automated an age ago, squads of poor outsourced workers in the middle of nowhere with no idea what is going on, plus out of touch executives who think they are masters of the universe. What could possibly go wrong?
It is hilarious to conceive that this dysfunctional bunch could produce genuine results, but one does not have to worry overly on their behalf. The good news is that it is all merrily financed by freshly printed money, so everybody lives happily ever after. What type of mean busybody spoilsport would we be to ruin all of that fun? And before investors and shareholders splutter that this setup is unfairly against their interests, they are equally to blame for adding illogical incentives, such as headcount-to-revenue targets, which direct technology towards mindless cost-cutting automation rather than genuine innovation or competitive customer service.
Variety is the spice of life as they say, so whilst such a mismatched group of people is a predictable commercial mess, it is not all bad. By chance I attended one of the legendary “geek and model” parties here in Vietnam. Some technology skills are in such competitive demand that companies will invite a bevy of beauties to glamorise a dull tech event and attract the prized programmers to work for them. Whilst wildly politically incorrect, who am I to criticize another culture or be an ungrateful guest? As a keen observer of humanity, I watched in fascination the human interactions unfolding around me.
The geeks were simultaneously attracted to the models but repulsed by the idea of a social conversation with them. Predictably, not much conversation or anything else transpired. Had they communicated, they would have found each other more down-to-earth and genuine than either group looked or acted. But that was not to be, and everybody’s awkward silence was fortunately masked by someone cleverly turning up the music to a level where conversation was inaudible anyway, which made the whole experience much more pleasant for all. One can only hope the big banks have their work equivalent of that loud music, which makes putting such a disparate group together more enjoyable.
Now let us leave these figurative gilded halls of power and amusing parties, to return to the more mundane story setting of Paul’s apartment with his Minions. A couple of months ago Amazon released some fun new technology in beta test version and Paul was quite excited by what he saw as some technology with revolutionary potential. After a few Sundays playing with it, he had proudly become one of the Amazon platform’s top five global technical contributors.
Keen to try it out, he left a client literally speechless by using the solution to automate, in a single afternoon, a complex business process that would normally have taken a team months to complete. His excitement was contagious, so I reached out to the local Amazon office to share our enthusiasm. Whilst Paul was having animated discussions with Amazon’s equally enthusiastic regional team about the technology, I was much more interested to hear about the human culture within Amazon.
Now Amazon has recently become the largest and most successful company in the world, something one has to assume does not happen by accident. These guys totally get it right in the modern world, which is unusual but an honour to behold. There are many elements to their culture, but their concept of “frugality” resonated most strongly.
Their mission is to solve the world’s challenges by creating technology platforms, and their method is to intentionally under-resource any efforts to do so. If a team is tasked with thinking of an entirely new solution, giving them too many resources will create lazy thinking and waste. The answer will by definition have to be innovative, so the head-on approach is unlikely to work. Frugality forces people to creatively find their way around a problem.
Another great thing about their approach was being so clearly focused on results without getting distracted by where the results were coming from. Even as the largest company in the world, there was not even a trace of arrogance, selfish ownership or politics.
We shared their passion for genuine technology solutions, and that was enough for them. They have achieved that elusive goal of keeping the start-up culture needed to survive a world of transformation whilst also being massively commercially successful. Their brilliant approach and culture are the exact opposite of the dysfunctional big banks’. Kudos.
Using a similar creative philosophy, Paul would not take “no” for an answer from an AI producing random-walk results when trying to predict financial markets. Whilst our imaginary friends at the big banks were still bashing their heads against the brick wall of random walk, spending billions in the process, we fortunately did not have that luxury. In the spirit of Amazon’s frugality principle, he had to walk to the end of the wall to see if there was an easy way through it. Not only was there a way through the wall, but there was a figurative door with a big flashing neon sign above it saying “solution this way”! We proceeded through that door and were left blinking in the figurative sunlight on the other side, not quite believing we had done it.
We had cracked the method to get AI to meaningfully make a consistent profit from trades. It involved bolting together a bunch of existing best in class platforms, simply because we could not afford to develop them ourselves. We had to be open minded and found one key element in a totally obscure place being used for something else. Layered on top were a set of trading algorithms and custom code which were not so advanced that others could not think of them.
But the result was hundreds of trades, each with a positive profit, producing a stable 78% annualised unleveraged return on capital. A few other obvious tweaks, like increased trading frequency, cleansed input data and price-chasing execution, pushed the results higher, towards an annual doubling of investment.
We accidentally pointed the system at the wrong fund when testing whether it worked on precious metals markets, with some random Israeli commodities tracker fund producing a 172% return in 17 days. No idea why, but jolly good fun. We naturally put our money where our mouth was, investing in the crypto trading we knew best and watching the results in half-belief. This will buy a whole bunch more Minions to keep the original crowd company. The financial returns also give us the luxury of “telling it like it is” in this article.
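As an aside, the arithmetic behind annualising such figures is simple compounding. The sketch below is purely illustrative (pure Python, my numbers plugged into the standard formula) and shows why a short-window figure like 172% in 17 days should be enjoyed rather than extrapolated.

```python
def annualise(total_return: float, days: float) -> float:
    """Convert a return earned over `days` into a compound annual rate."""
    return (1.0 + total_return) ** (365.0 / days) - 1.0

# A stable 78% annualised return turns 1.00 of capital into 1.78 over a year.
print(f"78% annualised: capital multiple after one year = {1.0 + 0.78:.2f}")

# 172% in 17 days, if it could somehow be sustained for a full year, would
# compound into an absurd multiple - which is exactly why short-window
# results deserve a raised eyebrow rather than a projection.
implied = annualise(1.72, 17)
print(f"172% in 17 days implies roughly a {implied:.1e}x annual return if sustained")
```

Doubling money in a year is, by the same formula, a 100% annual rate, which is where the tweaked results above were heading.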
But here we come to the crux of the whole matter. How could someone playing around on Sundays produce a better result than the big banks and all their resources? This led us straight back to the conspiracy theory that the big banks had cracked it and the billions were being secretly used by the aforementioned lizard aliens to take over the planet.
But surely regulatory reporting requirements, commercial competition and market transparency would foil such reptilian invasion plans. So we are back to the conclusion that we had genuinely done something the big banks had not, which is an unexpected pleasant surprise, but as a hypothesis it involves fewer lizards than the alternative explanation.
Let us take a deep breath then and recap here. The limitation in getting AI to predict markets was not caused by the AI’s weakness but was caused by human nature. A computer will do what it is told to by a human, who then has to be able to understand a correct answer when it is given. Everybody assumes the big banks will be closest to the answer because they have thousands of people and billions of dollars to throw at the challenge. But the truth is counterintuitive – they cannot reach the answer because they have thousands of people and billions of dollars.
They are incentivised to make it as complicated as possible and totally confuse their AI with too much data. They re-invent every wheel imaginable and lack the humility to look beyond their teams for existing solutions. Their self-interest and arrogance lead to them ignoring everybody except their big bank peers, who are all busy making the same identical self-inflicted mistakes.
Everybody assumes the challenge is a technological one, when in fact it has a very human solution. Amazon and their concept of frugality are right, as that is the way us humans become creative when we have to. The solution is possible with suitably organised, creative and motivated people effectively interacting with the most modern technology, regardless of where it comes from.
So we had found the holy grail, but what were we to do with it next? We told the Minions the good news and we like to think they were suitably impressed. Being stuffed toys, it is difficult to gauge their reaction, but we took their silence for approval. Telling complicated humans is a whole different game we knew. Paul said surely this would be as easy as giving away gold bars. Not so fast I said. This is the opposite and is about as attractive to a big bank as being sold a whack around the head with a baseball bat.
Imagine the scenario if I approached the head of AI research at a big bank. The technical solution works, but that is irrelevant, as the decision maker is a human, not their AI. In human terms, what I would in effect be selling is “I propose to humiliate you by destroying the very core of your self-worth, destroying your reason for existence, ending your career, removing your livelihood, putting your family at financial risk, trashing your reputation, dismantling everything you have built and making it your fault that the thousands of people who depend on you also have their careers derailed”.
Not the best unique selling proposition to say the least. Being a nice chap, I would not even consider having that conversation in the first place, as it would unnecessarily upset everyone.
Is then talking to big banks automatically a waste of time? Not necessarily. A few years ago, we were a winner at a fintech competition for a completely different foreign exchange software product, which was welcome enough in itself, but gave me the chance to speak with some regulators from a number of state banks who comprised the judging panel. We philosophically discussed how disruption works in reality in the world of finance.
Many a starry eyed fintech start-up dreams of disrupting the market, but this almost never happens in reality. The state banks made one excellent point on this topic – the big banks have such a concentration of capital, resources and experience that it was unrealistic to replace them. The state banks’ aims were to encourage the big banks to modernise to improve the existing system from within rather than external disruption. As one can imagine, financial markets and external disruption are not a happy mix for regulators seeking stability.
The method chosen to encourage reform was a well-thought-out effort, in Vietnam at least. Top fintech start-ups from around the world were assessed and the best invited to Hanoi, where the top ten were paired with ten Vietnamese banks. Each start-up and bank then had to co-present to a panel of state bank representatives how they would roll out a new technology. I ended up on stage with the deputy CEO of one of the largest local banks, which was fun but ultimately fruitless. The banks had paid lip service to new technology but were, quite honestly, comfortable without it.
The same principle would apply to any big bank in the world. Essentially, business is good for banks, so why fix something that is not broken? When your business model allows you to literally make your own money, with government guarantees protecting you from risk, not much can be truly broken. So imagine presenting our 78% return algorithms to a big bank CEO.
In theory the CEO has a responsibility to maximise returns to shareholders and customers, but in reality, they are not actually personally incentivised to do anything of the sort. They are already well compensated and if they keep on a steady path, they will ensure that generous compensation for years to come. Should a market downturn cut their career short, then it is the market’s fault not theirs, so a generous golden parachute and comfortable retirement awaits in any event.
Now imagine yourself in a big bank CEO’s shoes when presented with a new technology or idea. If you ignore it, your life is guaranteed to continue being good. If you decide to take a risk on a new technology, it could go well or it could go badly. The upside would be nice, but the downside would really mess things up for you.
If you took a risk and it all went wrong, it would be your fault, damaging your reputation and finances. So on balance the far more pragmatic human decision would be to do nothing – which is precisely what happens, as that is the best rational personal choice for the CEO. You would understandably make the same decision in their position.
So is there anyone else in a big bank who would consider a 78% return, if not the CEO or the head of AI research? Well, not really – it is basically nobody else’s business. They would either not have a clue or not care. I do occasionally meet people working with the big banks who lament the cynical predicament they find themselves in and say that, if the system allowed, they would love to see the technology, customer service and efficiency breakthroughs they know are possible actually happen.
They “get it”, but as pragmatic humans they also know they have mortgages, school fees and the like to pay, so they keep their heads down and get on with maintaining the status quo. They know it is ethically wrong, but the practical incentives are more powerful. The same even applies to non-executive boards, who are by nature pragmatic people who support the “party line” – otherwise they would not, of course, have made it to such an esteemed role in the first place.
What would a successful, ethical, technologically advanced new bank actually look like? Is it even possible to do it right? Well, yes. Imagine all HR systems and bonuses exclusively aligned with customer and shareholder interests. Only results would be rewarded, regardless of how long they took or how simple they were. Rather than jealous bureaucracies, there would be ecosystems, again selecting only those resources best aligned with clients and investors. Above all there would be genuine innovation, adoption of technology and an Amazon-like frugality culture. Possible, yes; likely, no.
So is there anybody out there who would be interested in a 78% return? Surely an actual investor would be interested? Well yes, they would in theory, but again human nature comes into play. To put your hard earned money into something, you have to have a high level of trust. To accumulate money, you have to have your wits about you and that will involve being a good judge of character and having a nose for avoiding scams.
You know full well that if something sounds too good to be true, it probably is. Most high-net-worth individuals have had a brush with scammers, and even if they do not get ripped off, they find the whole experience of dealing with the lowlifes involved most unpleasant in itself. So one develops a healthy aversion to anything that smells a bit fishy.
A potential large investor prides themselves on high emotional intelligence and an ability to spot scams. So anyone approaching them with something that seems too good to be true will either be ignored or subjected to laser-intense scrutiny of character. The investor would be applying sophisticated human judgement skills to make a decision. This is literally hard-wired evolutionary human nature that has been effective for millennia, so why on earth change it?
Well this is where technology has genuinely changed the game. If I were to approach an investor offering a 78% return, then their instinct is to apply the human trust checks that are so well known to see if what I am saying is true. The veracity of the message depends on the messenger surely?
However… what if we are dealing with a mathematical construct run by an AI? A mathematical algorithm applied to a financial market either works or it does not. This is easy to verify by simply looking at measurable indisputable results. Mathematical formulas do not have personalities or trustability factors. They are cold hard impersonal facts. One plus one equals two no matter who writes down that fact. It is not something happy or sad, good or evil, or anything possessing any identifiable emotional characteristic. It just is, or it is not. Human trust is not a factor.
But we are still human. When presented with a fact by an AI we are still instinctively tempted to apply our trusted human interpretation filters. It is humanly difficult for us to think like the AI. The computer is not just speaking a different language we could adapt to; it is something entirely alien to what we are. Just as there is a difference between humans understanding each other through language and dogs through sniffing backsides, AI is totally different again. It is possible, but it takes a complete change in mindset – to become inhuman, at least in that context. Not easy.
So the rational thing for an investor to do would be to test the 78% algorithm in a paper-trading environment to see if it works or not, regardless of any human factors. But that in itself is, as described, an inhuman thing to do. It goes against every instinct we have. Rather than upset ourselves with such a profound challenge, it is much easier to put any such unfamiliar concepts in the “too hard” basket. There are exceptions to this, but they are few and far between. A true visionary or a particularly ethical wealth advisor may actually do the right thing, but this is rare.
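To make that “inhuman” test concrete, here is a toy paper-trading sketch in pure Python. The prices and signals are made up for illustration (they are not the actual system or its data); the point is that verification reduces to arithmetic, with no judgement of character involved.

```python
# Toy paper-trading check: replay a strategy's signals against recorded
# prices and measure the outcome. Illustrative numbers only.

prices = [100.0, 101.0, 99.5, 102.0, 103.5, 103.0, 105.0]
# One signal per price transition: 1 = hold the asset, 0 = stay in cash.
signals = [1, 0, 1, 1, 0, 1]

capital = 1.0
for signal, (p0, p1) in zip(signals, zip(prices, prices[1:])):
    if signal:  # apply the interval's return only when invested
        capital *= p1 / p0

total_return = capital - 1.0
print(f"paper-traded return: {total_return:+.2%}")
```

Either the replayed trades compound to the claimed return or they do not; the messenger’s charisma never enters the calculation.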
I judge many start-up competitions and see some really brilliant ideas amongst some really dumb ones. I would recommend judging competitions as one of the most entertaining uses of time. The brilliant ideas are often presented by idealistic youngsters who know that their technology could really change the world. But I already know that it will not in reality, for all of the human incentive factors I have described. I give words of encouragement and am genuinely inspired personally, but that will not magically change human nature. I take the good parts of what I see – smart young people enthusiastically trying to make a difference and accept that for the good it is, without greedily expecting more.
I know the venture capital sponsors and observers may go for the occasional low-risk incremental innovation or copycat product, but I am realistic about why genuine change will not occur. I can assure you the technology advances that will be implemented in the next decades are already here. It is absolutely exhilarating and fascinating to hear about them, so again I would highly recommend judging start-up competitions. You will thoroughly enjoy it if your expectations are managed and, through an understanding of human nature, you are not actually expecting investors to jump at the opportunities.
After all of these considerations, we are presented with a stark picture of how human nature stifles innovation. We see how the different characters will not seize opportunity because of perverse incentives. But there is hope – there are companies like Amazon out there. Some people do get at least part of what the technology change means. Not as far as jumping on a 78% return, but at least understanding how the new transformational paradigms affect their day-to-day lives and jobs. One hears about the new “21st century cognitive skills”, which largely centre on empathy, creativity and lateral thinking. Those who have these skills tend to do much better, leading to a world of increasing wealth inequality. With even the current moderately slow pace of change causing disruption and inequality, one cannot blame people for being apprehensive of faster change.
A couple of years ago I asked a top AI programmer from Silicon Valley what the most important human skill was for interacting effectively with the new possibilities presented by AI. His answer of “humility” surprised me and took me a long time to fully understand. Basically, AI is totally alien to our human way of thinking, and it will present answers so unexpected and out of left field that we struggle to comprehend them.
When hit with a curveball AI answer, we have to have the calm humility to accept it at face value. Human rationalisation and trying to work out the reason for the answer are irrelevant intellectual indulgences which have nothing to do with getting on and implementing the answer. The answer just is what it is, and needs actioning no matter how counterintuitive it feels to us humans. We do not need to analyse it, question whether to trust it, or apply emotion – we just need to get on with it.
Now many a breakthrough will come through AIs finding answers, like our 78% return. If we think of who is best placed to have the humility to handle the answer, we find some rather amusing mismatches. Imagine venture capitalist investors and their propensity for humility.
They have risen in their profession through chutzpah, bravado and force of character. Their psychology is driven by a self-assured ability to judge others, and this ego has only been reinforced by countless supplicants petitioning them. Their skillset is the exact opposite of the humility required. They would not know what to do with an AI solution, as it is anathema to the type of person they are.
One can imagine taking a venture capitalist investor aside for a lunch to explain to them that the secret to their future success was humility. One would carefully explain that their brash judgemental attitude to life was in fact wrong. They would see this as a gross affront to their worth and would seek immediately to show who is boss. Their answer to the gentle humility recommendation would be somewhere between “how dare you”, “do you know who I am” and “know your place dawg”. Whereas some of the other meetings we have imagined would be unnecessarily painful, this one could actually be quite amusing. Very tempting.
It is not surprising that, against this inefficient backdrop, innovative start-ups dream of revolutionary disruptive revenge. They will show that ungrateful lot! Being natural non-linear thinkers, they will take a non-linear path direct to the end consumer, trying to bypass obstacles that way. Occasionally that can work, but most of the time it just leaves them wandering off the track, shouting in frustration into an empty wilderness.
This revolutionary approach tarnishes what could have been a good original business idea. The approach exacerbates and amplifies all of the negative human reactions described earlier, actually lowering any chance of success. Plus the territory off the beaten track is unfortunately populated by many of the types of schmucks that our mothers were right to warn us about. Life is not easy.
If life were easy, we would all be rich. I have marked a map of the multiple obstacles caused by human nature inhibiting innovation. To be fair, our natural caution toward change is not all bad. For example, the possibility always needs considering that the aforementioned conspiracies are true, in which case meeting a grisly end being eaten by an alien lizard would not be a desirable outcome. We humans have a healthy evolutionary fear of the unknown, and it is only by dint of not previously having been eaten by lizards that we can ponder such questions in the first place.
So it is not the AI’s fault. But maybe we should not blame human nature, as we are humans. To keep the peace, let us agree that it is the Minions’ fault. To look on the bright side, those pesky Minions are at least better than lizards.
– This story is contributed by Colin Blackwell, founder of tech startup social enterprise, chairman of HR committee of World Bank’s Vietnam Business Forum, founder of Vietnam’s HR associations and ecosystem builder.