The last few days have been quite bamboozling.
The world didn’t know what to make of Geoffrey Hinton’s ominous claim: “It is like aliens have landed on our planet and we haven’t quite realized it yet because they speak very good English”. And, unbecoming of Wittgenstein, these aliens understand us quite well through our language, well enough to promptly plot to “wipe out humanity”, and believe me, it does sound chilling in a nonchalant British accent. He even quotes the French philosopher Blaise Pascal as an afterthought: oh, by the way, “if you have two alternatives (i.e. AI killing humanity or not) and you have very little understanding of what is going on, then you should say 50 per cent”.
So that’s Geoffrey Hinton, a respected and reputed Turing Award winning researcher (incidentally his great-great-grandfather George Boole formulated the Boolean algebra that laid the foundation for the entire digital age), who resigned from Google with this very intention: to warn the unsuspecting world of a looming AI apocalypse, something that market media cherishes (at least here in India they are always ready and looking forward to a doomsday scenario; stock pictures and videos of Terminators are edited with a foreboding soundtrack to rivet the herd). He asserts that we are in a new stage of evolution, “when biological intelligence gets replaced by digital intelligence”. Hinton is described as the godfather of AI, and is behind the recent breakthrough in AI through Large Language Models, which are based on neural networks and deep learning (the backpropagation algorithm, which allows AI to refine and extract patterns and concepts from a vast quantity of data; that is, learning-based generative AI as against reason/logic-based AI), a breakthrough that changed the way machines see the world.
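For the technically curious, backpropagation boils down to nudging each weight against the gradient of the error. A minimal single-neuron sketch of my own (toy made-up data, nothing to do with Hinton’s actual networks):

```python
# A toy backpropagation/gradient-descent sketch: one neuron y = w*x + b
# fitted to made-up data by repeatedly following the error gradient.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # hypothetical points near y = 2x
w, b, lr = 0.0, 0.0, 0.05                    # weights start ignorant

for epoch in range(500):
    for x, target in data:
        y = w * x + b            # forward pass: make a prediction
        error = y - target       # how wrong were we? (loss = 0.5 * error**2)
        w -= lr * error * x      # backward pass: dLoss/dw = error * x
        b -= lr * error          # dLoss/db = error

print(f"learned w = {w:.2f}, b = {b:.2f}")  # settles near w = 2, b = 0
```

Scale the same idea up to billions of weights arranged in layers, and you have the training loop behind modern deep learning.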
Though neural networks had been an area of research since the 1950s, they were at a technological dead end; a mathematical system emulating the human brain was considered so bewilderingly impossible that even the term “neural network” was seen as offensive. Hinton doggedly pursued the idea with a minimum of facilities and research students, aiming to create machines that could not only recognize objects but also identify spoken words, understand natural language, converse, and solve problems humans couldn’t solve on their own. In 2012 his team presented a paper with the breakthrough claim (incidentally, 2012 is also the year in which the paper on another breakthrough technology, CRISPR-Cas9, was presented, and, for those keen on doomsday, there was a movie named '2012', Mayan calendar and all). Understanding the significance of this revolutionary technology, big tech companies (including the Chinese) bid to acquire it through a closed-door auction. Finally, Hinton’s team went with Google, but by now others were aware that a gamechanger was on the horizon, and the world was set for an AI race fueled by generative algorithms. The startup DeepMind was acquired by Google, while OpenAI was bankrolled by Microsoft; OpenAI eventually brought out ChatGPT. Those who were keeping abreast of the latest in this field are aware
that, much before ChatGPT, there were indications of something significant at work. I am sure some of us have watched the documentary AlphaGo (2017), about the AI that defeated the world champion at the game of Go; what was interesting was the emotional response of Lee Sedol, a sensitive fellow who eventually retired from the game. The counterintuitive moves were not accidental but a reflection of something deep at work. Anyone who has watched AlphaZero (an AI trained by playing against itself, given only the basic rules and the objective of winning, with no human input) play against Stockfish (until recently the most powerful chess engine, one trained on human games) will know how spectacular neural-network-based deep learning is. Its counterintuitive strategy produces moves that no human ever made; it brings out patterns beyond human comprehension, patterns that can only be known once the game is over for you to analyze. It’s spellbinding.
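To get a feel for the self-play idea (a deliberately tiny sketch of my own, nothing like the real AlphaZero, which combines deep networks with tree search), consider teaching an agent the game of Nim from nothing but the rules and a win/lose signal:

```python
import random

# Self-play in miniature: Nim with 7 stones, each player takes 1-3 stones,
# whoever takes the last stone wins. The agent is told only the rules and
# the objective; it discovers strategy by playing against itself.
Q = {}  # learned value of (stones_left, move) from the mover's perspective

def choose(stones, eps=0.2):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < eps:              # explore sometimes
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((stones, m), 0.0))

for _ in range(20000):                     # 20,000 games against itself
    stones, history = 7, []
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    reward = 1.0                           # the last mover won
    for state in reversed(history):        # credit alternates between players
        Q[state] = Q.get(state, 0.0) + 0.1 * (reward - Q.get(state, 0.0))
        reward = -reward

best = max((1, 2, 3), key=lambda m: Q.get((7, m), 0.0))
print(f"opening move learned: take {best}")  # game theory says 3, leaving 4
```

The agent rediscovers the classic strategy, leaving the opponent a multiple of four stones, with no human example to imitate; scale the board, the network and the search up enormously and you get AlphaZero’s alien chess.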
So, has AI acquired emergent properties? The concept of emergence has been in use in science for decades; it means complex, unpredictable behaviors emerging from simple natural laws. Emergent phenomena are ubiquitous in nature (indeed, nothing in science makes sense without emergence), and a proper grasp of how they come about could hold the key to solving some of our biggest mysteries. The Nobel laureate physicist Philip Anderson developed the idea of emergence as quantitative changes that can lead to qualitatively different and unexpected phenomena. For LLM systems (which are very large transformer neural networks, often spanning hundreds of billions of parameters, trained on hundreds of gigabytes of text data) it means abilities not present in smaller models but present in larger ones: unexpected and unintended abilities, or, to elaborate further, impressive abilities gained by programs that were supposedly never trained to possess them, that is, models transcending their training to acquire new functions. So, as a dampener, a paper (yet to be peer reviewed) by a team of Stanford scientists argues that the “glimmers of artificial general intelligence (AGI)” we’re seeing are all just “an illusion”. They found that when “more data and less specific metrics are brought into the picture, these seemingly unpredictable properties become quite predictable”. The researchers argue that “existing claims of emergent abilities are creations of the researcher's analysis, not fundamental changes in model behavior on specific tasks with scale”. In simple terms, the apparent emergence is borne out of inherently flawed metrics.
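A toy calculation (my own illustration of the argument, not taken from the paper) shows how a harsh metric can manufacture an apparent jump:

```python
# Suppose per-token accuracy p improves smoothly as models scale up,
# but the benchmark only awards credit for an exact k-token answer.
# The probability of exact match is p**k, which looks like a sudden
# "emergent" leap even though the underlying skill grew gradually.
k = 10  # hypothetical answer length in tokens
for p in [0.50, 0.60, 0.70, 0.80, 0.90, 0.95, 0.99]:
    print(f"per-token accuracy {p:.2f} -> exact-match score {p**k:.3f}")
```

Under the all-or-nothing metric the score crawls near zero for most of the range and then shoots up at the end; switch to a smooth metric like per-token accuracy and the “emergence” evaporates, which is essentially the Stanford team’s point.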
Soon enough, some have gone on the attack; this from an article I read the other day: “Claiming that complex outputs arising from even more complex inputs is ‘emergent behavior’ is like finding a severed finger in a hot dog and claiming the hot dog factory has learned to create fingers”. “Modern AI chatbots are not magical artifacts without precedent in human history. They are not producing something out of nothing. They do not reveal insights into the laws that govern human consciousness and our physical universe. They are industrial-scale knowledge sausages”. Even the claim of ChatGPT learning a new language, Bengali, on its own is being questioned. There are allegations of a market ploy to get attention and investors. It surely is getting nasty. Neural-network-based deep learning generative algorithms can provide solutions to lots of our problems, and may even bring out the hidden logic of fusion, but to claim that they will provide solutions for climate change or biodiversity loss is spurious. Scientists and researchers have documented those problems well; AI will only make them clearer, but that shouldn’t become an excuse to distract from fossil fuels. There is already an attempt to distract from fossil fuel emissions at the forthcoming COP28, a meet which is already doomed.
So where is the truth? The truth is not in the extremes. There is enough evidence of emergent ability in AI, at least in narrow, discrete settings. Halicin, protein folding and so on are brilliant examples of AI probing reason beyond our comprehension. The brilliance of a few simple rules of chess or Go creating unexpected complexities of moves, or reasons locked in a black box that we are not privy to: clearly, something very complex is being solved with a relatively simple algorithm. This is emergent ability. The problem is when you confine AI/generative algorithms to LLMs, and also mess things up with claims of AGI, consciousness, sentience and what not. On their own, LLMs like GPT-4, PaLM, Bard and so on are quite spectacular achievements; one needn’t spoil them with claims of AGI. There is a digital intelligence that is created when billions of parameters and gigabytes of data are involved, and with multimodal input (everyday talk and videos converted into text for the LLM) sights and sounds are accessed, and it is very conceivable that other senses will be activated.
It is conceivable that LLMs develop an internal complexity that goes well beyond shallow statistical analysis. They would then be much more than stochastic parrots, carrying enough complexity to build some representation of the world. Language has evolved through thousands of years of iteration; it has a logic of thought that is based on simple rules of grammar, but meanings are approximations of logic and context that may not be expressible in, or confined by, grammar. Powerful AI is well placed to access this black-box logic of thought. Hinton is right: there is an alien, and it is the sum total of our language.
One would also agree with Hinton that digital intelligence is better than biological intelligence, but whether biological intelligence is a transition towards digital intelligence is questionable. Digital intelligence requires much more energy, but it is shared across entire networks and can process much more data than we can. It is efficient, hence better, but that surely doesn’t mean prescient. Biological intelligence is a byproduct of millions of years of evolution, of necessary iteration between senses and surroundings. Mutualism and symbiotic relations have contributed significantly to the evolution of life. There is a compelling argument that is put forth: if submarines do not swim like fish and airplanes do not fly like birds, then why should computers think like humans? Quite true, and black-box logic is ample proof of it. But then fish and birds are not only swimming and flying; they also think and live complex lives. Logic is only a part of thinking, and the complexity of human thinking is a wonder despite low memory and weak computation. Humans will not be able to identify intricate patterns in the complex iterations of powerful computations, but they can create complex associations without any pattern, intuit, and use emotion to evaluate.
without any pattern, intuit and use emotion to evaluate. Though evolutionary algorithm (also algorithms from nature like swarm intelligence, and for instance light weighting algorithm to optimize energy and resource from as simple an organism as slime mold) adds to complexities of digital intelligence, in few iteration that has millions of years of making, it is fixed on what exist today as determination of collective intelligence
pattern on which to iterate learning. This wouldn’t factor breakthrough ideas
that define human progress. Nevertheless digital intelligence has lots of possibilities
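For what such nature-inspired search looks like in code, here is a bare-bones evolutionary loop (my own toy sketch, assuming a made-up fitness function):

```python
import random

# A minimal (1+1) evolutionary algorithm: keep one candidate, mutate it,
# keep the mutant only if it scores better. The "fitness" here is a toy
# target: maximize f(x) = -(x - 3)**2, whose optimum is x = 3.
def fitness(x):
    return -(x - 3.0) ** 2

x = random.uniform(-10, 10)             # random ancestor
for generation in range(1000):
    mutant = x + random.gauss(0, 0.5)   # small random mutation
    if fitness(mutant) > fitness(x):    # selection: survival of the fitter
        x = mutant

print(f"evolved solution: x = {x:.3f}")  # converges near 3.0
```

Notice the limitation the paragraph points to: selection can only climb towards optima of the fitness landscape it is handed; it iterates on what already exists rather than inventing a new landscape.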
Nevertheless, digital intelligence has lots of possibilities, and dangers too. It needn’t be smarter than humans to create trouble. Consider a virus: it doesn’t have the kind of biological intelligence or consciousness that humans have, but it is still lethal, can wipe out a substantial human population, and can severely strain and collapse human-created systems. Even life forms as simple as viruses and single-celled bacteria, which don’t match human intelligence, have sophisticated attack and defense systems (systems that even question our anthropomorphized ideas of intelligence); you just have to look at the amazing array of bacteriophages and bacterial defenses like CRISPR. Biological intelligence had the luxury of billions of years to evolve. Digital intelligence has deep learning memory with unmatched computational power that will grow exponentially; it is therefore expected to delve into the complexity of patterns and show emergent abilities, generating logic and surprises that take us closer to the nature of reality.