The three streams of opinions about AI
There seem to be three competing ideas about what AI means for our civilization. There is the positive view of the Silicon Valley optimists, who believe that all technology is ultimately good for humanity and that AI will turn out to be extremely positive for our society. People in this camp include Yoshua Bengio (professor and computer scientist) and Yann LeCun (Chief AI Scientist at Meta). Sam Altman, the co-founder and CEO of OpenAI, the company behind ChatGPT, was initially also in this group, as reflected in OpenAI's aggressive bid to release ChatGPT despite many ethical concerns over the technology. Sam Altman has since balanced his initially one-sided positive attitude towards AI (see the next section on AI regulation).
In the opposing camp we find the pessimists, who entertain the idea that AI technology could lead to the downfall of our species. The most notable figures here are Max Tegmark (professor at MIT) and, more recently, Geoffrey Hinton, one of the early and major contributors to the current AI field, who has turned significantly negative on the prospects of AI for humanity and left his position at Google to help shape public opinion on the subject.
The last camp is a group of scientists concerned with causality. This group claims that while current AI systems are impressive in many ways, they are still just correlation machines and thus unable to understand our world in any causal way. Microsoft scientists experimented with the new AI systems late last year, asking the AI to stack a book, nine eggs, a laptop, a bottle and a nail in a stable manner, a task that requires an understanding of our physical world. The answer was clever, and the scientists suggested that they might be witnessing a new kind of intelligence. Later, a group of AI sceptics added a bit of variation to the same question and immediately saw from the AI's solution that it has no understanding of the physical world.
AI regulation is coming
Sam Altman participated yesterday in a US Senate committee hearing on AI, discussing many themes from regulation to copyright. He said that government regulation of AI is crucial to prevent the technology from becoming a runaway train, and that he believes in a government AI licensing model. He also said that OpenAI is not making any money and that every time someone uses ChatGPT, the company loses money. Finally, he said that he worries about the technology and especially about how it could harm children.
After the committee hearing, Senator Blumenthal said that the US Congress cannot be the gatekeeper of AI regulation, that the FTC does not have the capabilities to do it, and that AI regulation should ultimately be part of broader technology regulation. There is no doubt that regulation of AI is needed to ensure that it is used responsibly and not to harm society, but if done incorrectly, regulation carries the risk of regulatory capture by big firms and limited competition. For the firms involved in AI, regulation has the potential benefit of raising the barrier to entry and thus improving profitability.
Sam Altman also talked about the generative output of AI systems, with OpenAI's DALL-E 2 image generator being able to produce images from text input. Generative AI comes with two risks. The first is the risk of copyright infringement and the lack of pay for artists, as their original art has clearly been part of the training of the DALL-E 2 system. Sam Altman said that OpenAI is working on a copyright system to ensure payment to artists. The other risk is that generative AI will flood the Internet with AI-generated content, which will then dominate the training samples of future AI systems. The question is whether that will naturally lead to a plateau in the development of this type of AI system. One thing is for sure: AI will remain the most debated topic in 2023 among regulators and investors.
[Figure: 5-year price chart of Microsoft and Alphabet (Google)]