An AI S-Curve
I find it fascinating to see how quickly, and in how many directions, practical AI ideas are multiplying right now. I installed OpenClaw recently and set up some daily prompts - show me interesting information, mine some RSS feeds I like - and it has helped me learn new things each day… But I did not expect the breadth of agent orchestration patterns and LLM architectures it would present to me regularly.
This is some AI slop imagery if I’ve ever seen it. But - this article is not slop.
Two quick examples surfaced today:
- GeeksforGeeks explaining episodic memory for agents: Episodic Memory in AI Agents
- Reco laying out guardrail patterns for production AI stacks: Adding Guardrails for AI Agents: Policy and Configuration Guide
Neither is a formal paper. Both are from practitioners shipping systems right now. A year ago, this kind of content was mostly trapped in academic PDFs. Today it’s coming from product teams, founders, and individual engineers working in parallel. It feels like the technology crossed that invisible line from “novelty” to “industry-wide tinkering”. Invention is no longer trickling out of labs; it is erupting from every corner.
The S Curve
That surge reminds me of the classic tech S curve, often attributed to Richard Foster: slow start, sudden acceleration, eventual plateau. Agentic AI seems to be hitting the inflection point: hundreds of experiments sprouting in every direction as people discover what’s possible. At the top of the S curve we’ll reach a point of diminishing returns, where the technology will be developed sufficiently that we’re deriving almost as much value from it as we can, or at least we’ll have reached a “local maximum” that requires a leap forward or sideways to get past.
Generated by AI because - fun and silly
Unfortunately there’s no good way to measure where we are on that S curve; it’s a descriptive model (performance vs. effort) more than a precise dashboard. Historically, analysts infer position after the fact by looking at metrics like:
- Performance improvements per unit cost or time (e.g., GPT quality vs. training runs),
- Adoption/market saturation (percentage of workflows automated by agents),
- Productivity gains from each new iteration (are we still getting 10× leaps or inching forward?).
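To make the descriptive model concrete, the S curve is essentially a logistic function of performance vs. cumulative effort. A minimal sketch, with entirely invented parameters (the ceiling, midpoint, and steepness are illustrative, not measured), showing why marginal gains peak near the inflection point and then diminish:

```python
import math

def s_curve(effort, ceiling=1.0, midpoint=5.0, steepness=1.0):
    """Logistic curve: performance as a function of cumulative effort."""
    return ceiling / (1 + math.exp(-steepness * (effort - midpoint)))

def marginal_gain(effort, delta=1.0, **kw):
    """Performance gained from the next unit of effort."""
    return s_curve(effort + delta, **kw) - s_curve(effort, **kw)

# Early on, each unit of effort buys more than the last (acceleration);
# past the inflection point, returns diminish toward the plateau.
early = marginal_gain(2.0)
inflection = marginal_gain(4.5)
late = marginal_gain(9.0)
print(early < inflection and late < inflection)  # gains peak near the midpoint
```

The catch, as noted above, is that `midpoint` and `ceiling` are only knowable in hindsight - which is exactly why position on the curve can’t be read off a dashboard.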
We’ll only know which stage we were in once we look back. For now, the volume and diversity of attempts suggest to me that we’re early in the steep climb.
At the peak of the S curve there will be a huge swath of experiments behind us, and we will stand on the shoulders of those innovators. We’ll know which architectures, guardrails, and memory schemes actually mattered, and which ones were clever dead ends. From that vantage point, we’ll integrate the keepers, retire the clutter, and build new leaps atop the plateau, solving the problems of that moment. We will winnow that sprawl of innovation down to the set of tools that matter most, and the overwhelming majority will be left behind as obsolete.
Some historical examples of the S curve in action:
- Electrification (late 1800s): hundreds of motor designs, distribution schemes, and safety standards emerged in a frenzy. Decades later, only a handful of configurations survived into modern grids.
- Early internet (1990s): countless protocols, browsers, and search engines bloomed. We remember a few (TCP/IP, HTTP, Google) because they scaled; the rest faded despite their ingenuity.
- Smartphones (2000s): quirky form factors and OS experiments everywhere. Eventually, capacitive touch slabs plus iOS/Android became the “winners”, but we learned from the detours (Palm, Symbian, BlackBerry).
Each wave had an era when builders couldn’t tell which ideas would stick or when the innovation would end. Only with hindsight could we map the S curve. If we could imagine ourselves in that future position, understand the problems we will have then, and experimentally find the filter that winnows the wheat from the chaff, we could pull the most important technologies forward for further integration.
Developing the Innovations that Stick
Understanding that future position would allow us to move technology development closer to application. But - to take a step back - I must clarify that trying to select only a few winning technologies during the rise of the S curve is a losing proposition. Uncertainty is everywhere, and in Antifragile Taleb explains why it is critical to make many small investments with high upside and low downside. That must be our strategy more broadly at this time… Still - it also makes sense to invest time in specific promising techniques that seem likely to pay off.
Thanks OpenClaw agent.
Attempting to understand the future position and problems is similar to the techniques of backcasting (popular in sustainability planning) and future-back strategy (from corporate roadmapping). The idea is to pick a future state (perhaps “agentic AI embedded safely in every workflow”), define the conditions there (only a handful of memory models survived, guardrails standardized, regulators codified rules), and then plan backwards to identify which experiments to double down on now. It’s a complement to traditional forecasting:
- Forecasting: extrapolate from today forward (“if adoption grows 30% annually…”).
- Backcasting: anchor in a desired or expected future, then figure out the steps required to reach it (“what needs to be true for safe agents to be ubiquitous?”).
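The forecasting half is simple compound-growth arithmetic. A toy sketch, where the 5% starting share is invented and the 30% growth rate is just the hypothetical from the bullet above:

```python
def forecast_adoption(current_share, annual_growth, years, cap=1.0):
    """Extrapolate adoption share forward, capped at full saturation."""
    share = current_share
    for _ in range(years):
        share = min(share * (1 + annual_growth), cap)
    return share

# If 5% of workflows use agents today and adoption grows 30% annually,
# naive extrapolation says roughly 69% of workflows in ten years.
print(round(forecast_adoption(0.05, 0.30, 10), 3))
```

Backcasting inverts this: instead of projecting the curve forward, you fix the end state and ask which present-day conditions would have to hold.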
Silly silly ChatGPT.
Applying backcasting here might mean:
- Imagine your use-case.
- Sketch a mature AI stack for that case (memory needs, likely context requirements, interoperability standards).
- Identify which current experiments align with that vision.
- Allocate attention accordingly - “pull the wheat forward”, and let the rest compost into future insight.
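The steps above can be sketched as a crude scoring exercise. Everything here is invented for illustration - the end-state needs, the experiment names, and the tags - but it shows the shape of “allocate attention accordingly”: rank current experiments by how much of the envisioned future they point toward.

```python
# Hypothetical end-state requirements for a mature agent stack.
FUTURE_NEEDS = {"shared_memory", "guardrails", "human_in_loop"}

# Invented current experiments, tagged by which future needs they touch.
experiments = {
    "episodic-memory-schema": {"shared_memory"},
    "policy-guardrail-layer": {"guardrails", "human_in_loop"},
    "novel-prompt-trick": set(),
}

def alignment(needs_met, future_needs=FUTURE_NEEDS):
    """Fraction of the envisioned end-state an experiment points toward."""
    return len(needs_met & future_needs) / len(future_needs)

# Pull the wheat forward; let the rest compost into future insight.
ranked = sorted(experiments, key=lambda e: alignment(experiments[e]), reverse=True)
print(ranked[0])  # the experiment most aligned with the imagined future
```

The point isn’t the scoring function - any real version would be messier - it’s that writing the future needs down at all forces the backcasting step.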
Why This Matters
So why bother categorizing all this? Because being deliberate about when and how we adopt emerging mechanisms can prevent wheel-spinning. If, for example, we know we’ll eventually need:
- Shared memory schemas (to avoid bespoke retrievers for every agent),
- Guardrail layers that satisfy regulators and customers,
- Orchestration patterns that let humans stay in the loop,
… then we can prioritize experiments that point toward those outcomes.
The bazaar of ideas.
We can’t skip the messy middle. But we can observe it, understand it, envision the future, and make smarter bets about which inventions to pull forward. That’s the difference between getting swept up in this Cambrian explosion and guiding our investments (time, especially) to speed technological evolution.