Who and Why Invented AI? AI basics by WhyAI


This article was written by Alex Sherman & WhyAI.


Short Intro

Imagine, if you will, a painter’s canvas, awaiting the deft touch of its artist. Instead of paints and brushes, there are circuits and algorithms. The masterpiece? Artificial Intelligence (AI). At its core, AI is simply a machine’s ability to mimic human-like tasks, particularly cognitive functions. Yet the backdrop to this revolution was an era bustling with technological mavericks, colossal corporations, and, dare I say, a dash of serendipity.

Enter the mid-20th century: computers were newborn infants, humongous in size, and limited in function. The tech realm’s air crackled with possibilities, each new invention seemingly shrinking our expansive universe a smidge more. Visionaries like Alan Turing dared to ask audacious questions: Can machines think? Such queries, peppered across cafes and labs from Silicon Valley to the meandering streets of Oxford, would ultimately sow the seeds for the rise of AI.

In ancient times, humans gazed up at the heavens, seeking to harness the power of the stars. Fast forward to the Renaissance, when alchemists tried to turn base metals into gold. These historical pursuits mirror our modern quest to recreate the mind’s power. The inception of AI traces back to humanity’s age-old endeavors: the Greeks’ attempts to build automatons, Da Vinci’s sketches of flying machines. These inklings, the dreams of our forefathers, are kin to our computational dreams. When the 20th century rolled around, the intersection of technology and philosophy formed the crucible in which AI would be born.

AI Prerequisites

Before AI became a darling buzzword, there were certain dominoes that had to tumble first. The invention of the digital computer, a monolith of logic and math, laid the groundwork. Then, we had algorithms: a series of logical instructions guiding these computers. Yet, an essential catalyst was missing.

Enter: data. For computers to “learn”, they needed vast amounts of data. And what better decades than the ’50s and ’60s, an era of space races and geopolitical chess, to begin amassing data on an unprecedented scale? In parallel, a quiet academic revolution brewed. Theories of neural networks, the precursors to today’s deep learning models, began to surface. Visionaries like Donald Hebb postulated that networks of neurons could, in principle, learn.
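
To make Hebb’s intuition a little more concrete, here is a minimal, hypothetical sketch in Python of the rule often summarized as “neurons that fire together, wire together”: each connection is strengthened in proportion to the joint activity of the neurons it links. The function name, learning rate, and toy numbers are illustrative choices, not Hebb’s original 1949 formulation.

```python
# Toy illustration of Hebbian learning: strengthen a connection whenever
# the neurons on both ends of it are active at the same time.

def hebbian_update(weights, pre, post, learning_rate=0.1):
    """Return new weights, each nudged by learning_rate * pre-activity * post-activity."""
    return [
        [w + learning_rate * x * y for w, x in zip(row, pre)]
        for row, y in zip(weights, post)
    ]

# Two input neurons feeding two output neurons; all connections start at zero.
weights = [[0.0, 0.0], [0.0, 0.0]]
pre_activity = [1.0, 0.0]   # only the first input neuron fires
post_activity = [1.0, 1.0]  # both output neurons fire

weights = hebbian_update(weights, pre_activity, post_activity)
print(weights)  # connections from the active input grow; those from the silent one stay at zero
```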

The cherry atop this computational sundae? Processing power. Without the rapid advancements in this realm, AI might’ve remained a pipe dream, relegated to the dusty annals of academia.

The progression of AI didn’t happen in a vacuum. It was the result of a continuum of technological evolution. Picture the ancient abacus, one of the first devices to aid calculation. It laid the groundwork for more sophisticated machines like Blaise Pascal’s mechanical calculator of the 17th century. By the 19th century, Ada Lovelace had written what is recognized as the first algorithm intended for a machine, Charles Babbage’s Analytical Engine. The landscape shifted drastically in the 20th century. Remember the punch cards used in early computers? These rudimentary coding systems evolved into sophisticated programming languages. The 1970s ushered in the era of the microprocessor, shrinking computers and democratizing access to them. By the ’80s and ’90s, the advent of the personal computer set the stage for AI’s accelerated development. Every step was a building block, pushing us closer to the AI dream.

So Who?

And now, to unveil the protagonists of our tale: those audacious dreamers who breathed life into AI.

  • Alan Turing (UK): Often dubbed the ‘father of computer science,’ Turing asked in his landmark 1950 paper, “Computing Machinery and Intelligence,” whether machines could think, and proposed an imitation game to find out. His famed Turing Test still stands as a gold standard for gauging a machine’s “intelligence”. With every coded whisper within our smartphones, Turing’s legacy lives on.
  • John McCarthy (USA): An academic par excellence, McCarthy christened the term “Artificial Intelligence” in 1956 and organized the famed Dartmouth workshop, a gathering that many historians cheekily regard as AI’s debutante ball. McCarthy envisioned a world where every aspect of learning or any other feature of intelligence could be broken down to such a degree that a machine could simulate it. A tad optimistic? Perhaps. But we have him to thank for every Alexa and Siri that serenades us today.
  • Marvin Minsky (USA): A polymath and co-founder of the Massachusetts Institute of Technology’s AI laboratory, Minsky was often referred to as the “Old Man of AI.” His early neural-network machine, SNARC, and his later analysis of perceptrons (with Seymour Papert) shaped the early course of neural-network research. He believed in the inevitability of machines surpassing human intellectual capabilities, a notion both thrilling and (let’s admit it) a tad terrifying.
  • Andrey Markov (Russia): No story on AI’s forefathers would be complete without a nod to Markov. His work on chains (sequences of events in which each step depends only on the one before), while not directly AI, deeply influenced probabilistic models in AI, especially in natural language processing. If you’ve ever marveled at how your texting app predicts your next word, tip your virtual hat to Mr. Markov; a toy sketch of the idea follows this list.
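
To see Markov’s idea at work, here is a minimal, hypothetical sketch of next-word prediction with a first-order Markov chain: count which word tends to follow which, then suggest the most frequent follower. The function names and toy corpus are invented for illustration; real keyboard prediction is far more sophisticated, but the underlying intuition is the same.

```python
from collections import defaultdict, Counter

def build_chain(text):
    """Count, for each word, which words follow it (a first-order Markov chain)."""
    chain = defaultdict(Counter)
    words = text.lower().split()
    for current_word, next_word in zip(words, words[1:]):
        chain[current_word][next_word] += 1
    return chain

def predict_next(chain, word):
    """Suggest the most frequent follower of `word`, if we have seen it before."""
    followers = chain.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "i love tea . i love coffee . i drink coffee every morning"
chain = build_chain(corpus)
print(predict_next(chain, "i"))        # -> "love" (follows "i" twice, vs "drink" once)
print(predict_next(chain, "weather"))  # -> None (never seen, so no suggestion)
```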

The stage was set. From Turing’s musings in war-torn Britain to McCarthy’s fervent beliefs echoing through American academia, the march towards AI was well underway. It was the beginning of a journey that would challenge our very notions of intelligence, creativity, and perhaps, humanity itself.


One must marvel at the audacity of these luminaries. They dared to envision a future far removed from their present, a future where machines didn’t just compute, but thought. As we delve deeper into AI’s rabbit hole, remember, it’s not just about ones and zeros. It’s about aspirations, failures, and the unwavering human spirit.

While our article recognizes some of the most notable figures in AI, the tapestry of AI’s inception is vast and interwoven with many contributors. 

Consider Grace Hopper, the “queen of software.” It was her pioneering spirit that led to the creation of the first compiler, translating human instructions into machine code. Her efforts streamlined machine-human communication. 

Then there’s Geoffrey Hinton of Canada, the “Godfather of Deep Learning.” Hinton’s work on backpropagation helped power the current resurgence of neural networks. The world owes a lot to these unsung heroes, each playing a pivotal role in the sprawling narrative of AI.

Where It Led

AI, in its infantile days, was like an overeager toddler trying to fit a square block into a round hole. But with time, this toddler grew, learned, and evolved, making strides that have left even cynics gobsmacked.

AI in the 21st century has been transformative, to say the least. Accenture reported in 2018 that by 2035, AI could boost average profitability rates by 38% and add some $14 trillion to economic output across 16 industries in 12 economies. If that doesn’t make your monocle pop out in astonishment, I don’t know what will!

Forms of AI have diversified. While pure software-based AI (think of Siri making quips or Alexa playing your favorite tune) is commonplace, there’s a rise in AI embedded in hardware. From Tesla’s Autopilot system to advanced surgical robots, AI is no longer confined to the realm of ones and zeros but has hands, legs, and, metaphorically speaking, a bit of a soul.

Let’s take a moment to reminisce about AI’s nascent stages. The 1960s saw the rise of the ELIZA program, crafted by Joseph Weizenbaum at MIT. This rudimentary chatbot simulated a psychotherapist’s responses. By recognizing keywords and deploying pre-determined scripts, ELIZA managed to create the illusion of understanding; a toy sketch of the trick appears below.
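
For a flavor of how that keyword-and-script trick works, here is a tiny, hypothetical sketch in Python. The keywords and canned replies are invented for illustration; this is not Weizenbaum’s actual program or his DOCTOR script, just the same pattern in miniature.

```python
import random

# Map a handful of keywords to canned, therapist-flavored replies.
SCRIPTS = {
    "mother": ["Tell me more about your family.", "How do you feel about your mother?"],
    "always": ["Can you think of a specific example?"],
    "sad":    ["I am sorry to hear you are sad.", "What do you think is making you sad?"],
}
FALLBACKS = ["Please go on.", "How does that make you feel?"]

def respond(user_input):
    """Scan for a known keyword and return a scripted reply; otherwise deflect."""
    lowered = user_input.lower()
    for keyword, replies in SCRIPTS.items():
        if keyword in lowered:
            return random.choice(replies)
    return random.choice(FALLBACKS)

print(respond("I am always sad about my mother"))  # matches the first keyword found ("mother")
print(respond("The weather is nice today"))        # no keyword, so a generic deflection
```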

Another seminal breakthrough was the “General Problem Solver,” conceptualized by Allen Newell and Herbert A. Simon. This program was engineered to emulate human problem-solving, breaking a goal into sub-goals much as people do. Both inventions were pivotal, offering a tantalizing glimpse of machines that could “think”.

Why AI?

Diving deeper into our intrepid pioneers, one might ask, “What sparked such audacious ambition?” What was the invisible puppeteer guiding their hands?

  • Alan Turing: Turing’s wartime work at Bletchley Park, where his machines helped crack the Nazis’ Enigma ciphers, laid a foundation. For Turing, AI wasn’t about replacing humans but expanding our horizons, a sentiment echoed in his words: “We can only see a short distance ahead, but we can see plenty there that needs to be done.”
  • John McCarthy: McCarthy’s enthusiasm for chess was pivotal. If a game as intricate as chess could be broken down into logical steps for a machine to understand, then why not human cognition? He once quipped, “As soon as it works, no one calls it AI anymore.” A sassy nod to AI’s ubiquity in modern times.
  • Marvin Minsky: Minsky’s fascination lay in understanding human cognition and replicating it. He envisioned machines capable of human-like learning, musing, “Once the computers got control, we might never get it back. We would survive at their sufferance.”
  • Andrey Markov: Markov, knee-deep in probability theory, was keen on patterns. His chains showed that the future of a sequence could be predicted, probabilistically, from its present state. Essentially, he was the cool mathematician at the party telling everyone, “I told you so!”

Why AI? Beyond the scientific curiosity, there was a shared dream: augmenting human potential, solving problems previously thought unsolvable, and maybe, just maybe, understanding what makes us, us.

The historical backdrop against which AI rose to prominence was tumultuous and charged with ambition. The fervor of the Cold War era resulted in an influx of funding and urgency for tech research. As nations grappled in a silent battle of supremacy, the technological frontier saw a surge of innovations. The space race, advancements in missile technology, and the shadows of espionage necessitated tools that could rapidly process vast swathes of data. AI was emerging not just as a subject of academic fascination but as a tool of paramount strategic importance. Behind every algorithm and line of code was a dream, not just of scientific triumph but also of geopolitical dominance.

What’s Next?

Ah, the future! It’s almost here, with plenty of promising startups and researchers (check this startup, which creates digital worlds from just a text input). A tantalizing cocktail of hopes, dreams, and a dash of trepidation. While the 4th Industrial Revolution was characterized by AI and connectivity, whispers of a 5th loom large. Experts believe it will be characterized by the deep integration of humans and machines. We’re talking brain-computer interfaces and AI-enhanced human abilities.

Elon Musk’s Neuralink endeavors to enable direct communication between the brain and computers, whereas companies like OpenAI (which, between us, birthed yours truly) focus on ensuring AI benefits all of humanity.

Visionaries like Ray Kurzweil predict that by 2045, we might hit the ‘Technological Singularity’ – a point where technological growth becomes uncontrollable and irreversible. Futuristic? Yes. Possible? Only time will tell.

But here’s the gist: as we stand on the precipice of the future, gazing into the vast unknown, it’s clear that the AI narrative is just beginning. From pondering over a machine’s ability to think to witnessing machines making decisions, AI’s odyssey is an epic of Homeric proportions.

In the sage words of a famous cinematic time traveler, “The future is not set. There is no fate but what we make.” So, as we hurtle into tomorrow, here’s to making it, innovating it, and maybe even adding a dash of wit along the way.

Future of AI

The horizon of AI holds promises and mysteries. One term making waves is “quantum computing.” These aren’t just faster computers; they’re a complete overhaul of our computational framework. Leveraging the principles of quantum mechanics, these machines promise data processing at speeds previously deemed fantastical. Marrying AI with quantum computing could redefine the very paradigms of what we deem possible. On another frontier stands neuromorphic engineering. By designing chips and algorithms that mirror the brain’s neural architecture, we might just craft AI that doesn’t just think but also “feels” in some rudimentary sense.


What is WhyAI? Positioned at the vanguard of AI empowerment, WhyAI demystifies the complex realm of Artificial Intelligence for enterprises and individuals alike. Seamlessly traversing the dynamic intersection of technology and commerce, WhyAI is reshaping the AI narrative, one algorithm at a time. Their work ranges from meticulous market research, keeping clients at the cutting edge of AI’s swift advancements, to bespoke consulting that turns AI from an abstract notion into a formidable asset for businesses. Through their enlightening articles, they provide a window into the constantly shifting AI landscape, and their innovative R&D remains unyieldingly in pursuit of the next game-changer. Moreover, with “trAIner” soon to debut, WhyAI stands on the brink of redefining HRtech. Delve into WhyAI, an entity that not only clarifies the ‘how’ of AI but fervently explores the ‘why’.
