In 2017, a team at Google published a paper titled “Attention is All You Need.” Nobody knew then that this paper would alter the course of human history.
Today, in 2025, I sit in front of my screen watching models correct their own mistakes, write complex code, conduct research, negotiate, and design. The line between “narrow AI” and “artificial general intelligence” has blurred far faster than anyone expected.
This article is my personal reflection: not an academic report, not a prophecy of doom. It’s what runs through my mind as an engineer and a human being living through this moment.
What exactly is AGI?
Artificial General Intelligence is a system capable of performing any intellectual task a human can perform — and learning new tasks it hasn’t been trained on.
This differs fundamentally from Narrow AI (ANI), which excels in one specific domain: GPT writes but can’t drive a car; AlphaGo plays Go but doesn’t understand ordinary conversation.
True AGI learns, adapts, generalizes. Like an intelligent human who can learn anything given time and information.
How did we get here? The real timeline
2017 — The beginning: Transformer
“Attention is All You Need” introduced the Transformer architecture. The central idea: instead of processing information linearly, the model can “attend” to different parts of the context simultaneously.
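To get a feel for how small the core idea is, here is a toy NumPy sketch of single-head scaled dot-product attention, the operation at the heart of the paper (no masking, no learned projections, no multi-head machinery):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: every position scores every other
    position at once, then takes a weighted mix of the values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the context
    return weights @ V                              # context-aware outputs

# Toy self-attention: 4 tokens, 8-dimensional embeddings, Q = K = V
x = np.random.default_rng(0).normal(size=(4, 8))
print(attention(x, x, x).shape)  # (4, 8): each token now sees the whole context
```

Every token attends to every other token in a single matrix multiplication; that parallelism, rather than step-by-step recurrence, is what let the architecture scale.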
This seemed technical. But it changed everything.
2020 — GPT-3: The turning point
GPT-3 from OpenAI was a shock: a 175-billion-parameter model writing text that was often indistinguishable from human writing. Skeptics said it was “just statistics, no real understanding.” Perhaps. But the practical applications were very real.
2022 — ChatGPT: The popular explosion
When ChatGPT launched in November 2022, it reached one million users in five days, making it at the time the fastest-adopted consumer application in history. Not because it was the smartest, but because it was the easiest to use. The conversational interface democratized access to AI.
2024 — Reasoning models
OpenAI’s o1, then o3. Claude 3.5 Sonnet. These models don’t just answer: they think. They review. They verify. They correct their own errors.
When o3 was tested on math, programming, and science benchmarks, it outperformed the vast majority of humans. Not through memorization — through reasoning.
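The labs don’t publish the exact mechanics, but the observable behavior has the shape of a propose-verify-revise loop. Here is a toy sketch of that shape; `propose` and `verify` are stand-ins I invented for illustration, not any lab’s API:

```python
# Toy "answer, then check your own work" loop. The stubs fake a model
# that gets the answer wrong once, is told why, and then corrects itself.

def propose(question: str, feedback: str | None) -> int:
    # stand-in for a model call; improves once it receives feedback
    return 42 if feedback else 41

def verify(question: str, draft: int) -> str | None:
    # stand-in for a checker: a unit test, a calculator, or a second pass
    return None if draft == 42 else f"draft {draft} failed the check"

def answer_with_review(question: str, max_rounds: int = 3) -> int:
    feedback = None
    for _ in range(max_rounds):
        draft = propose(question, feedback)
        feedback = verify(question, draft)
        if feedback is None:      # the draft survived its own review
            return draft
    return draft                  # give up after the round budget

print(answer_with_review("what is 6 * 7?"))  # -> 42, on the second round
```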
2025 — Autonomous Agents
The current qualitative leap: Agents. Models that don’t just answer a question, but complete entire tasks (a minimal sketch of the loop follows the list):
- Write code, run it, fix its errors, and deploy it to production
- Search the internet, aggregate information, and write a report
- Manage a project: tasks, deadlines, communication
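To make the pattern concrete, here is a deliberately minimal sketch of the loop most agent frameworks share: the model picks an action, the runtime executes it, and the observation is fed back in until the model declares the task done. The scripted model and stub tools below are hypothetical placeholders, not any real vendor’s API:

```python
# Minimal agent loop with stub tools and a scripted "model" for illustration.

TOOLS = {
    "search": lambda q: f"(stub) top results for {q!r}",
    "write_report": lambda notes: f"(stub) report based on: {notes}",
}

def scripted_model(history: list[dict]) -> dict:
    """Stand-in for an LLM call: returns the next action, or a final answer."""
    tool_turns = sum(1 for m in history if m["role"] == "tool")
    if tool_turns == 0:
        return {"action": "search", "input": history[0]["content"]}
    if tool_turns == 1:
        return {"action": "write_report", "input": history[-1]["content"]}
    return {"final": history[-1]["content"]}

def run_agent(task: str, max_steps: int = 10) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = scripted_model(history)
        if "final" in decision:                 # the model says it is done
            return decision["final"]
        result = TOOLS[decision["action"]](decision["input"])
        history.append({"role": "tool", "content": result})
    return "stopped: step budget exhausted"

print(run_agent("recent AGI progress"))
```

Real agents swap the scripted stub for a model call and the lambdas for real tools (a shell, a browser, an editor), but the control flow is essentially this loop.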
The pace exceeds expectations — with real numbers
This isn’t a feeling. There are objective metrics:
MMLU (Massive Multitask Language Understanding): A benchmark of roughly 14,000 questions across 57 subjects. In 2020: the best models scored about 43%. In 2024: they exceed 90%.
HumanEval (code writing): In 2021: the first Codex models solved about 29% of its problems. In 2023: GPT-4 reached 67%. In 2024: o3 solves 96%.
SWE-bench (fixing real GitHub bugs): In 2023: 1.96%. In 2024: 49%. A 25-fold increase in one year.
The trajectory isn’t a gradual curve. It’s approaching vertical.
Three things that concern me
I won’t be naive and pretend this development is all upside. The concerns are real:
1 — Concentration of power
The few companies that own these models now possess enormous power. OpenAI, Google, Anthropic, Meta. All Western, and mostly American. What does this mean for the Arab world, which consumes this technology but doesn’t produce it?
2 — Employment
Not all jobs are equally at risk. Repetitive knowledge work — writing specific reports, translation, customer service, data entry — will be severely affected.
But even jobs that seemed safe have begun to wobble. Programming, law, medicine, even art.
3 — Disinformation
Models capable of generating convincing, personalized content without human effort. This is the perfect weapon for disinformation at a scale we’ve never known before.
Three things that excite me
But the picture isn’t entirely dark:
1 — Democratization of knowledge
A young person in a remote village in Yemen or Indonesia can today access information and assistance that was once exclusive to the wealthy in major cities. An AI doctor guides, an AI lawyer explains, an AI teacher teaches.
This is not a substitute for human experts. But it’s far better than nothing.
2 — Scientific acceleration
DeepMind’s AlphaFold solved the protein-folding problem that had stumped science for fifty years. Now similar models are accelerating drug discovery, climate research, and the discovery of new materials.
Problems that would have taken a generation of scientists to solve will be solved within a few years.
3 — Individual productivity
Personally: I build today, by myself, what would have required a team five years ago. This very website was built with substantial AI assistance. Not a replacement for thinking, but an amplification of it.
My personal position
I don’t live in the binary of “AI will save the world” versus “AI will destroy it.” Reality is more complex.
What I know: change is coming. And the question is not whether it happens, but how we prepare for it.
For me as an engineer in Saudi Arabia:
I work on understanding these tools deeply. Not superficially. Understanding model architectures, how Agents work, where they fail and where they excel.
I think about value that machines cannot produce. Local context, human judgment, personal trust, lived experience.
I stay curious rather than afraid. Fear freezes. Curiosity moves.
A final word
When I wrote the first article on this blog in 2022, I was using Hugo and thinking about how to support Arabic. Today I’m building with Astro and talking about AGI.
Technology is accelerating. And the acceleration itself is accelerating.
What remains constant: curiosity, the desire to understand, and the belief that knowledge is worth the effort.
Seek knowledge and never grow weary.