How humans have imagined, created, and interacted with intelligent machines throughout history

In the grand theatre of human achievement, few actors have made an entrance as dramatic, controversial, and downright audacious as artificial intelligence (AI).

Diving into the labyrinth of artificial intelligence’s history and its probable future is akin to embarking on a time-traveling escapade, where the line between science fiction and reality blurs faster than a quantum computer solving a Rubik’s cube.

Imagine, if you will, a world where machines not only perform tasks but learn, adapt, and evolve. A world where your toaster might one day outsmart you at chess, and your vacuum cleaner could pen a sonnet to rival Shakespeare.

Welcome, dear reader, to the thrilling, terrifying, and utterly captivating world of artificial intelligence.

Ancient myths and legends

The idea of creating artificial beings that can think and act like humans or gods can be traced back to ancient myths and legends from various civilizations, long before the term “artificial intelligence” was coined. 

In Greek mythology, the god Hephaestus was the master craftsman who created mechanical servants, such as the bronze giant Talos, who guarded the island of Crete, and the golden maidens, who assisted him in his workshop.

In Hindu mythology, the king Ravana had a flying chariot called Pushpaka Vimana, which could navigate autonomously and follow his commands.

What is AI?

To begin a discourse on artificial intelligence, we must first define it, and to define "artificial intelligence", we must first explore the definition of the word "intelligence".

The term “intelligence” is derived from the Latin word “intelligentia”, and it has been defined in many ways. According to the Cambridge Dictionary, intelligence is the ability to learn, understand, and make judgments or have opinions that are based on reason. It’s about grasping truths, relationships, meanings, and more.

AI is the branch of computer science that aims to create machines and systems that can perform tasks normally requiring human intelligence, such as reasoning, learning, decision-making, perception, and natural language processing. In essence, it aims to have computers mimic human intelligence: the simulation of human intelligence processes by machines, especially computer systems.

The Organisation for Economic Co-operation and Development (OECD) defines an "AI system" as:

 "A machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

This definition, although a lot broader than some AI experts believe it should be, underscores the role of input in operating AI systems and the ability of these systems to influence both physical and virtual environments. 

It's a broad definition that encompasses many different types of AI systems and has been recently adapted into the European AI Regulation - the EU AI Act.

AI isn’t a new concept; in fact, it has a long and fascinating history that spans diverse cultures, disciplines, and domains. In this blog post, we’ll explore some of the key milestones and developments in the history of AI, from ancient myths and legends to modern applications and challenges.

The birth of modern AI

The term "artificial intelligence" was coined by the American computer scientist, John McCarthy, in 1956, when he organized a conference at Dartmouth College and invited a group of researchers who were interested in creating machines that could simulate human intelligence. 

The conference is widely considered the birth of modern AI, as it marked the beginning of a new field of study that attracted funding, talent, and attention. 

Attendees of the conference included Marvin Minsky, Claude Shannon, Allen Newell, and Herbert Simon, who later became influential figures in AI research.

One of the early achievements of AI was the development of the Logic Theorist, a program that could prove mathematical theorems using symbolic logic. It was created by Allen Newell, Herbert Simon, and Cliff Shaw in 1955. 

Another milestone was the creation of ELIZA, a natural language processing program that could mimic a psychotherapist, created by Joseph Weizenbaum in 1966. 

ELIZA was one of the first examples of a chatbot, a computer program that can converse with humans using natural language.
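
To make the idea concrete, here is a minimal, hypothetical Python sketch of the kind of keyword-and-template matching ELIZA relied on. It is an illustration of the technique, not Weizenbaum's original program, and the rules and phrases are invented for this example.

```python
# A minimal, hypothetical sketch of ELIZA-style pattern matching.
# The real ELIZA used keyword-ranked "scripts"; this only illustrates the idea
# of reflecting a user's words back with canned templates.
import re

RULES = [
    (r"I need (.*)", "Why do you need {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r"(.*) mother(.*)", "Tell me more about your family."),
]

def respond(message: str) -> str:
    """Return a template-based reply, or a generic prompt if nothing matches."""
    for pattern, template in RULES:
        match = re.match(pattern, message, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."

print(respond("I need a holiday"))  # -> "Why do you need a holiday?"
```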

The rise and initial decline of AI

The 1960s and 1970s witnessed a rapid growth and expansion of AI research, as many subfields and applications emerged, such as computer vision, speech recognition, knowledge representation, expert systems, robotics, and machine learning. AI also received significant support and funding from the military and the government, especially during the Cold War and the Space Race. 

However, AI also faced many challenges and limitations, such as the difficulty of scaling up and generalizing the solutions, the lack of common sense and contextual understanding, the brittleness and unreliability of the systems, and the ethical and social implications of the technology. 

These factors led to a period of reduced interest and funding for AI, known as the "AI winter", which lasted from the late 1970s to the late 1980s.

The resurgence of AI

The 1990s and 2000s saw a revival and resurgence of AI, thanks to several factors, such as the availability of large amounts of data, the increase of computational power and storage, the development of new algorithms and methods, such as neural networks and deep learning, and the emergence of new domains and applications, such as the internet, social media, gaming, and e-commerce. 

AI also became more accessible and ubiquitous, as it was integrated into various products and services, such as search engines, digital assistants, recommendation systems, facial recognition, and self-driving cars. 

AI also achieved remarkable feats, such as defeating human champions in chess (1997), Jeopardy! (2011), and Go (2016), generating realistic images and videos, and creating original music and art.

Machine learning is a subset of artificial intelligence that involves the development of algorithms that can learn from and make predictions or decisions based on data. It enables computers to improve their performance on a specific task over time without being explicitly programmed. 

Machine learning has been around since the 1950s, but it has gained significant attention in recent years due to the availability of large amounts of data and increased computational power.
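
As a rough illustration of this "learn from data, then predict" loop, here is a minimal Python sketch using scikit-learn (assumed to be installed). The data and numbers are made up purely for demonstration.

```python
# A minimal sketch of "learning from data" with scikit-learn.
# The model is fitted to example data rather than being explicitly programmed
# with a rule, then used to predict an unseen case.
from sklearn.linear_model import LinearRegression

# Toy training data: hours studied -> exam score (invented values)
hours = [[1], [2], [3], [4], [5]]
scores = [52, 58, 65, 70, 77]

model = LinearRegression()
model.fit(hours, scores)        # "learn" the relationship from the data

print(model.predict([[6]]))     # predict a score for 6 hours of study
```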

Deep learning, a branch of machine learning, uses multi-layered neural networks to learn and make decisions. These layers can learn data features and representations at different levels of abstraction, allowing deep learning models to handle complex tasks such as image and speech recognition. 

Deep learning originated in the 1940s, but it has become more prominent in the 21st century due to access to large amounts of data and improved computational power.
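
As an illustrative sketch of that multi-layered structure, this is roughly what a small deep network looks like in PyTorch (assumed installed); the layer sizes are chosen arbitrarily for a flattened 28x28 image-classification example, not taken from any particular system.

```python
# A minimal sketch of a multi-layered ("deep") neural network in PyTorch.
# Each Linear + ReLU layer can learn a progressively more abstract
# representation of the input features.
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(784, 128),  # raw pixels -> low-level features
    nn.ReLU(),
    nn.Linear(128, 64),   # low-level features -> higher-level features
    nn.ReLU(),
    nn.Linear(64, 10),    # higher-level features -> class scores (e.g. digits 0-9)
)

dummy_image = torch.randn(1, 784)   # a fake 28x28 image, flattened
print(model(dummy_image).shape)     # torch.Size([1, 10])
```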

Machine learning and deep learning are useful because they allow computers to learn from data and generate predictions or outcomes, which can be applied to a large variety of issues in different fields.

The rise of generative AI

Born from the audacious dreams of science fiction and the relentless curiosity of mankind, generative AI has waltzed onto the stage with all the subtlety of a sledgehammer in a china shop.

Generative AI, sometimes referred to as GenAI, has been conceptualized for decades but has only recently been technically possible. It's worth noting that Generative AI is different from the theoretical AGI (artificial general intelligence), which aims to replicate human-level general intelligence in machines.

Generative AI models are trained on input data and then create new data that resembles it. Improvements in transformer-based deep neural networks led to a surge of generative AI systems in the early 2020s, which became hugely popular after OpenAI released DALL-E in 2021 and ChatGPT in 2022.
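
As a small, hedged sketch of what "generating new data" looks like in practice, the snippet below uses the Hugging Face transformers library (assumed installed) with GPT-2, chosen only because it is small and openly available; it is far less capable than modern LLMs such as ChatGPT.

```python
# A minimal sketch of text generation with a transformer-based model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Artificial intelligence began as", max_new_tokens=30)
print(result[0]["generated_text"])  # a short, model-invented continuation
```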

Generative AI has become very common in every area of work and personal life. There are many benefits to the development of “large language models” (LLMs like ChatGPT) and other GenAI models.

GenAI has increased the availability of advanced AI tools in 2023 and 2024. However, there are also some criticisms, from safety and privacy to ethics and bias, to commercial structures and paywalls that restrict access to the best GenAI models, possibly creating a bigger social and knowledge gap between the “haves” and the “have-nots”, and potentially accelerating societal inequalities.

The chronology of AI summarised

Here are some key milestones in the development of AI:

  • Greek myths (antiquity): Greek myths of Hephaestus and Pygmalion incorporated the idea of intelligent automata (such as Talos) and artificial beings (such as Galatea and Pandora).
  • 1940s - 1950s: the ‘birth’ of artificial intelligence.
  • 1943: During World War II, Alan Turing and neurologist Grey Walter were among the bright minds who tackled the challenges of intelligent machines.
  • 1950: Alan Turing published his paper, “Computing Machinery and Intelligence,” introducing the Turing Test.
  • 1950: Science fiction writer Isaac Asimov publishes I, Robot, imagining the future of machine intelligence; he is best known for his Three Laws of Robotics.
  • 1956: The term 'artificial intelligence' is coined at the Dartmouth Conference.
  • 1964-1966: ELIZA, an early example of a natural language processing program, is developed by Joseph Weizenbaum at MIT.
  • 1974: The U.S. and British governments stopped funding undirected research into artificial intelligence.
  • 1980s: AI winter due to withdrawal of funding.
  • 2010s: AI boom due to successful application of machine learning in academia and industry.
  • 2017: Google researchers introduce the ‘Transformer’ in the paper “Attention Is All You Need” – a modern AI architecture that will become the underpinning of Generative AI.
  • 2022: OpenAI takes the world by storm with its viral introduction of ChatGPT – the now ubiquitous large language model.
  • 2023: Political agreement is reached on the EU AI Act; AI regulations being developed in other countries have yet to be fully adopted into law.
  • 2023-2024: The Generative AI race heats up, with several key players like Microsoft, OpenAI, Google, Anthropic, and Meta releasing and iteratively improving their Generative AI models like Copilot, Gemini, GPT-4V and GPT-4 Turbo, and Claude, as well as open-source foundation models like the UAE’s Falcon 40B and Meta’s Llama.
  • 2023-2024: Microsoft brings generative AI to the world of work with Copilot fully integrated into Office 365 apps like Word, Excel, MS Teams, PowerPoint, etc.
  • 2023-2024: Pair-programming assistants like Copilot X and GitHub Copilot, Databricks Assistant, and Amazon Bedrock are revolutionizing software development.
  • 2024: Generative AI-powered robots are showing huge promise in general-purpose tasks. Companies like Tesla and BMW are introducing Generative AI-powered humanoid robots in their factories.
  • 2024: Small language models (SLMs) like Phi-2 and ‘on-device’ AI are being introduced by the likes of Microsoft and Samsung as processing power and AI models improve.
  • 2025 - ?: Humanoid robots in homes? General Purpose AI Assistants replace mobile and PC operating systems? Artificial General Intelligence (AGI)?

Asimov’s Three Laws of Robotics

Asimov's Three Laws of Robotics are the bread and butter of any self-respecting AI enthusiast, and they've been causing delightful headaches for roboticists and ethicists alike since their inception.

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm. This is the robotic equivalent of the Hippocratic Oath.

Second Law: A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law. This is the "I, Robot, am your humble servant" law, ensuring that robots are here to serve us, not the other way around.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. This is the "self-preservation" law, ensuring that robots can't be casually destroyed by their human overlords.

Asimov later added a Zeroth Law, which superseded the others:

"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

This is the "greater good" law, ensuring that robots consider the welfare of humanity as a whole.

Asimov's Three Laws of Robotics have had a profound influence on AI research, shaping the way we think about machine behavior and ethics.

Ethical framework 

The laws have grown from a thought experiment into an essential conceptual framework for real-world robotics and AI ethics. They've sparked countless discussions and debates, highlighting the importance of designing AI systems that respect and protect human life. 

Some interpretations of this framing, such as AI that works ‘for the greater long-term good’ despite the costs it imposes today, are frowned upon in the AI community as distractions from current AI harms and risks.

Human safety

The First Law underscores the importance of ensuring that AI systems do not cause harm to humans. This has led to the development of AI technologies prioritizing human safety, security, and dignity.

Human-AI collaboration

The Second Law emphasizes the need for AI to obey human instructions, placing humans in control. This has highlighted the importance of developing AI systems that augment human capabilities, foster collaboration, and empower individuals rather than replace them.

Ethical system behavior

The Third Law calls for AI systems to protect their own existence as long as it aligns with the first two laws. 

Asimov’s third ‘self-preservation’ law is generally not a ‘baked-in’ function of current AI systems, as it is the most challenging of the three to interpret and implement ethically.

Influence on public perception

Asimov's laws have also influenced public perception of AI, becoming a cultural touchstone in understanding artificial intelligence.

These laws have shaped practical discussions on AI ethics, but the fast progress of AI technology requires further scrutiny of these fundamental rules. As AI improves, ethical issues become more important, resulting in the creation of ethical standards, AI ethics boards, and research centers focused on AI ethics.

AI risks, ethics, bias, and responsible AI development

As AI becomes more integrated into our daily lives, it's important to consider the potential risks and challenges it presents. Here are some of the key risks associated with AI:

Lack of AI transparency and explainability

AI and deep learning models can be difficult to understand, even for those who work directly with the technology. This results in a lack of transparency into how and why AI arrives at its conclusions.

Job losses due to automation 

AI-powered job automation is a pressing concern as the technology is adopted in industries like marketing, manufacturing, and healthcare. McKinsey estimates that, by 2030, tasks accounting for up to 30 percent of hours currently worked in the U.S. economy could be automated.

Social manipulation through AI algorithms

AI algorithms can be used to manipulate social and political discourse, spread misinformation, and influence public opinion.

Privacy violations

AI systems often require substantial amounts of data, which can lead to privacy concerns.

Algorithmic bias 

AI systems can perpetuate and amplify existing biases if they are trained on biased data without a methodology for spotting and mitigating bias in the data collection methods, the data itself, the training and interpretation of models, or the use of the AI outputs.
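
One concrete (and deliberately simplified) way to start spotting such bias is to compare outcome rates across groups in a model's decisions. The Python sketch below illustrates this with invented data and a basic demographic-parity style check; it is not a full fairness audit, and the group labels and decisions are purely hypothetical.

```python
# A hypothetical sketch of one simple bias check: comparing approval rates
# between two groups in a model's decisions. All data below is invented.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def approval_rate(group: str) -> float:
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = approval_rate("group_a"), approval_rate("group_b")
print(f"group_a: {rate_a:.0%}, group_b: {rate_b:.0%}, ratio: {rate_b / rate_a:.2f}")
# A ratio far below 1.0 is a signal to investigate the data and model further.
```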

GenAI hallucinations 

Hallucinations in AI are nonsensical or non-factual outputs from generative AI models. They are sometimes very plausible (especially in text generation) and can lead to the spread of misinformation, amongst other challenges such as the erosion of trust.

Overreliance on AI and automation

As we use GenAI tools more, and because these models occasionally ‘hallucinate’, we become prone to overreliance on AI tools that mostly ‘sound right’ or mostly operate correctly. This can lead to real risks, ranging from the banal (made-up citations in a court case) to the serious and fatal (self-driving cars running over pedestrians).

Accelerated proliferation of cyberattacks and other security issues

Recent advances in GenAI come with specific issues related to online security. LLMs and similar foundation models present new attack vectors that bad actors can capitalize on, as well as a new, more ‘efficient’ way for bad actors to probe networks, software, and other systems for weaknesses and exploit them far more quickly and at scale.

This is a non-exhaustive list. There are a lot of other AI risks to consider like deepfakes in video and photo imagery, voice cloning, identity theft, and fraud. There have already been two very high-profile cases in the media in 2024.

Addressing these challenges requires a concerted effort to develop AI responsibly, emphasizing ethical considerations, transparency, and inclusivity in AI development and deployment. 

Initiatives like the Ethics Guidelines for Trustworthy AI by the European Commission and the AI principles outlined by leading AI organizations reflect a growing commitment to responsible AI.

As we progress into a more AI-enabled future, it becomes more and more important that we adopt Responsible AI practices to ensure we use these AI systems in a transparent and fair manner, and that we have taken adequate steps to mitigate bias, harm, and other risks to humanity.

