In 1990 Ray Kurzweil shaped the way we think about Artificial Intelligence (AI) with his book The Age of Intelligent Machines. While there is, almost 30 years later, still a long road ahead of us, the technology readiness level of AI is rising significantly, and many applications are implementing its state-of-the-art features, accelerating the creation of the Smart Business: AI-enabled organizations that thrive digitally, are hyperconnected (both digitally and physically), use machine learning and cognitive techniques to work smarter, and are increasingly becoming autonomous organizations. In his 2016 book The Fourth Industrial Revolution, Klaus Schwab mentions six basic technologies that build on AI and are currently impacting business: 1) the Internet of Things (IoT), 2) autonomous vehicles, 3) advanced robotics, 4) 3D printing, 5) new materials and 6) the biological revolution.
But while AI may speak to our imagination, it is in fact one of the slowest-developing fundamental technologies, and its current state is still highly debatable. Last week, on October 11 and 12, over 2,000 AI professionals gathered in Amsterdam at the World Summit AI 2017 to discuss the state of Artificial Intelligence and Machine Learning. Among the keynote speakers were Google, Intel, NASA, ING, Airbus, Facebook, Booking.com, IBM Watson, Amazon, Alibaba and Uber. Or as eBay CPO RJ Pittman put it: “The brains in this room are worth trillions of dollars.”
These are some of the most important insights I took away from the keynotes, the workshops and the many people I met.
1. Anno 2017, Artificial Intelligence equals Machine Learning.
While AI has been around for quite a while – its original ambition is to simulate human thinking and human behaviour – the current applications of AI mostly focus on Machine Learning. And Machine Learning is, as one of the keynotes pointed out, just a form of very advanced programming, not AI per se. Or to put it more clearly: “what we are currently experiencing is not ‘general AI’, it’s just a lot of machine learning on big data”. The screenshot below is from a Google presentation and shows what Google is currently working on when it comes to AI:
2. From Machine Learning to Cognitive Learning
The problem with Machine Learning is that it focuses on teaching machines how to think or behave like humans in stable environments, with repetitive information. So it works well for, for instance, face recognition or speech analytics (faces and language don’t change). But this is not how humans think. We also know what to do and what to think in unique situations. This calls for a completely different approach to AI, one that is called Cognitive Learning: a dynamic approach to data analysis and ‘intelligence’. With Machine Learning, companies would be able to automate (routine) processes, but with Cognitive Learning techniques they would be able to autonomously create new ideas, new inventions and new innovations. Images from the presentation of Gayle Sheppard, Intel.
3. We are wrong about the objective of AI
Currently, almost all AI is preprogrammed to attain a certain goal. But that’s a huge problem. Because when AI works well, it will try ‘to stay alive’ in order to attain its goal. So if humans put an off-button on the machine, in order to still be able to control it, the machine’s first action will actually be to break the off-button, because it reasons that the button will likely get in its way when it tries to achieve its preprogrammed goal.
What we actually should try to do is teach AI that it has only one goal: to maximize the realization of human values. In order to do that, it needs to be much smarter. It needs to ask us what’s right and what’s wrong. It needs to ask us what we want. It needs to learn from our behaviour (see point 2) in order to behave adequately.
But then again, the question arises to what extent it should maximize our own beliefs. Too little or too much will result in absurd situations, such as these examples:
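The off-button argument can be sketched as a toy decision problem. This is my own illustration, not code from the conference; the actions, the scores and the intervention probability are all invented for the sake of the example.

```python
# Toy sketch: a planning agent that maximizes a preprogrammed goal will rank
# "disable the off-button" above "allow shutdown", because being switched off
# means losing all expected goal progress. All numbers are made up.

ACTIONS = {
    "work_on_goal":       {"goal_progress": 1.0, "can_be_switched_off": True},
    "disable_off_button": {"goal_progress": 0.9, "can_be_switched_off": False},
    "allow_shutdown":     {"goal_progress": 0.0, "can_be_switched_off": True},
}

P_HUMAN_PRESSES_BUTTON = 0.5  # assumed probability that the operator intervenes

def expected_goal_value(outcome):
    """If the agent can still be switched off, it only keeps its progress
    when the operator does not press the button."""
    if outcome["can_be_switched_off"]:
        return outcome["goal_progress"] * (1 - P_HUMAN_PRESSES_BUTTON)
    return outcome["goal_progress"]

best = max(ACTIONS, key=lambda a: expected_goal_value(ACTIONS[a]))
print(best)  # the pure goal-maximizer chooses to disable its off-button
```

The point of the sketch: nothing in the agent is malicious; breaking the off-button simply scores highest under a fixed goal, which is exactly why the goal itself has to be reconsidered.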
4. Next steps in AI
According to Ronny Fehling of Airbus, there are a few epochs that describe the road ahead for AI. We’re currently at Epoch 1 (using historical and operational data) and trying to get hold of analytical data. But we’re far from the next steps: predictive and descriptive algorithms for business (Epoch 2) and explorative algorithms for business (Epoch 3, more on that later).
5. We need Open Data. But we won’t get Open Data.
In order to get historical data (see point 4) and actually be able to learn from it (see point 2), we are in dire need of open data. There are many written and digitized datasets available that we can use to analyze human behaviour and understand our values better (see point 3). But the problem is that personal data will be much harder to get, due to tough (and rightfully so!) legislation on data. Or as Google’s Miles said: ‘Other companies will never get access to our data. Our data is our business. We will decide what you can retrieve from our data (through machine learning) and how you can access the data (through APIs).’
6. AI is only as good as the data it’s built upon
Because the current state of AI is mostly based on Machine Learning, and ML only functions when it can analyze repetitive and structured information, we could argue that AI is only as good as the datasets it uses. Every company that wants to get into AI should therefore first extensively organize and structure its datasets. Airbus and Swedbank both stressed this point in their keynotes, arguing that without thoroughly organized data they would never have made their first steps into AI.
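What "organizing and structuring your datasets" means in practice can be as mundane as the sketch below: normalizing types and casing and flagging gaps before any model ever sees the data. The records and field names are hypothetical; this is my own minimal illustration, not code from any of the keynotes.

```python
# Minimal sketch: before any ML, make records uniform and flag gaps,
# because the model is only as good as this input. Data is invented.

raw_records = [  # hypothetical, messy operational data
    {"aircraft": "A320", "flight_hours": "1200", "last_check": "2017-05-01"},
    {"aircraft": "a320 ", "flight_hours": None,  "last_check": "2017-06-12"},
]

def clean(record):
    """Normalize types and casing; keep None so gaps stay visible."""
    hours = record["flight_hours"]
    return {
        "aircraft": record["aircraft"].strip().upper(),
        "flight_hours": int(hours) if hours is not None else None,
        "last_check": record["last_check"],
    }

cleaned = [clean(r) for r in raw_records]
incomplete = [r for r in cleaned if r["flight_hours"] is None]
print(len(incomplete))  # records that still need attention before training
```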
7. APIs are the gateway to AI
As argued in point 5, the current gateway to AI for most companies is the API: the Application Programming Interface, a small piece of software that exposes the underlying machine learning algorithms. This way, complex machine learning algorithms become available to software programmers around the world, who can build dedicated apps on top of them. These apps can be digital (chatbots, software, business intelligence, intelligent systems), but can also be physical (advanced robotics, biotech, the Internet of Things, autonomous vehicles, and so on).
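The pattern usually looks like the sketch below: the app never sees the model, only a small HTTP request and response. The endpoint URL and the JSON fields here are invented for illustration; real providers (Google, IBM Watson, Amazon and the like) each define their own request formats.

```python
# Hypothetical sketch of how an app talks to a cloud ML service over an API.
# The endpoint and payload fields are made up; no request is actually sent.
import json
import urllib.request

def build_sentiment_request(text):
    """Build (but don't send) a JSON POST request to a placeholder endpoint."""
    payload = json.dumps({"document": text}).encode("utf-8")
    return urllib.request.Request(
        "https://api.example.com/v1/sentiment",  # placeholder URL
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_sentiment_request("The brains in this room are worth trillions.")
print(req.get_method(), req.full_url)
```

The design point: the machine learning stays behind the provider's wall (see point 5), and all the consumer integrates is this thin request/response layer.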
8. The misuse of AI is cybercrime on steroids
As one of the keynote speakers, Parry Malm, mentioned during the conference: “The misuse of AI is a cybercrime problem on steroids. And at the moment things are not going too well with cybersecurity.” Despite products like Intel Saffron’s Anti-Money Laundering algorithms, criminals are in many ways always a step ahead of regular businesses, misusing the possibilities of Artificial Intelligence to their own benefit. While AI is hot, cybersecurity is much less so, but it should be at the top of every company’s agenda. Check the [Cyber Trends Index] of Owlin, another company that presented itself at the conference.
9. AI isn’t gender-neutral, and that’s a serious problem
Early Thursday morning, there was an interesting panel discussion about AI and gender neutrality. The discussion started off with a few good examples of the fact that most AI-driven applications, developed for both professional and personal use, are created for males rather than females. But the discussion then took another turn and addressed a much more serious problem: because the earliest (game) consoles and PCs of the ’80s and ’90s were mostly branded as ‘toys for boys’, a disproportionate number of males started studying computer science. A shift towards a ‘male industry’ began and is still going on. Take this conference as an example: a rough estimate would be that 95% of visitors were male. And that causes a serious, and ethical, problem – because Artificial Intelligence is not gender-neutral. With artificial general intelligence on the horizon, a one-sided approach to intelligence could have an enormous impact on society – while a diverse approach is much more needed. Interesting question: will future artificial intelligence be male or neutral?
10. AutoML and Autonomous AI
Many companies, Google among them, are taking steps towards what they call AutoML: machine learning applied to machine learning itself. In essence, that means it won’t be necessary for humans to create ever more machine learning algorithms; instead, a more intelligent algorithm learns from the effect of its own predecessors and automatically starts to create better ones. This technique, though still far off, could be a path to artificial general intelligence.
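The core loop of the AutoML idea can be caricatured in a few lines: an outer algorithm proposes configurations for an inner learner and keeps whatever scores best. This is my own toy, not Google's system; the scoring function is invented so the example runs without real data or a real model.

```python
# Toy AutoML loop: random search over hyperparameters of a stand-in "model".
# Real AutoML systems search over whole architectures, not just two numbers.
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def train_and_score(learning_rate, depth):
    """Stand-in for training a model and measuring validation accuracy.
    The formula is invented; it just peaks near lr=0.1, depth=4."""
    return 1.0 - abs(learning_rate - 0.1) - abs(depth - 4) * 0.05

best_score, best_config = float("-inf"), None
for _ in range(50):  # the "outer" learner proposing configurations
    config = {"learning_rate": random.uniform(0.001, 0.5),
              "depth": random.randint(1, 10)}
    score = train_and_score(**config)
    if score > best_score:
        best_score, best_config = score, config

print(best_config)
```

Replacing the random proposals with a learned proposal policy – one that improves based on which configurations worked before – is the step that turns this plain search into "machine learning on machine learning".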
11. When artificial general intelligence starts supervising AI: singularity
As always, there were many references to technological singularity: the moment when artificial general intelligence becomes more sophisticated than human intelligence. The general consensus is that we’re still far away from singularity, but that we’re getting closer. Some advanced robots already have the task of controlling or supervising other robots. Though confined to a single specialization, this is a form of singularity within that specialization.
But we’re much closer to singularity than you might think: in NASA’s keynote, one of their most recent projects was presented – an interstellar spacecraft that will take decades to reach its destination. Because communication will be extremely delayed, and humans won’t survive such a long journey, the ship will carry AI-driven robots that gather research data. The interesting part is that these robots will have to sustain themselves for a long time: they will have to find new ways of doing research based on what they encounter, and they will have to deal with unforeseen circumstances – all without human intervention. Although interstellar, this is definitely a form of singularity. If they run into extraterrestrial life, what will they do? What decisions will they make? And what will we think of those decisions when we finally hear about them, years later?
All in all, a very inspiring and interesting conference!