Have you heard the story of the guy who wanted to learn how to ride a bike?
As a geeky child, he buried himself in textbooks and avoided physical activities. Later in life, he decided it was time to learn how to ride a bike. As usual, he read a book on the subject and watched several YouTube videos demonstrating riding techniques. He learned about balance, steering, pedaling, braking, posture, and momentum. He could explain the mechanics and the physics. He was prepared – or so he thought.
The rapid adoption of large language models (LLMs) is beginning to reshape how work is done across the economy. In software development, healthcare, law, finance, and customer service, AI systems are increasingly taking over tasks that once served as the natural entry point for human workers.
A significant share of code at major technology companies is now generated by AI tools such as GitHub Copilot and ChatGPT; Microsoft and Google report that 25-30 percent of their new code is AI-generated.
At the same time, labor market data is sending mixed signals. According to the U.S. Bureau of Labor Statistics, overall U.S. unemployment remains just over 4 percent, yet more than 10 percent of young people seeking entry-level jobs are unable to find one.
The pattern is familiar. When automation arrives, it does not start by replacing experts. It replaces beginners.
And beginners are not just workers. They are future experts in training.
This dynamic is not confined to software. It is emerging across multiple knowledge professions.
- In law: contract review, legal research, discovery, and document drafting are increasingly handled by AI.
- In customer service: chatbots now resolve a large fraction of Tier-1 support requests without human involvement.
- In finance: AI supports fraud detection, financial modeling, compliance checks, and report generation.
- In marketing and media: copywriting, image generation, video editing, and campaign optimization are increasingly automated.
- In human resources: resume screening, interview scheduling, and candidate assessment are now partially AI-driven.
In each case, the first roles to shrink or disappear are the training-ground roles: junior analysts, paralegals, medical scribes, entry-level developers, and customer support representatives.
Despite remarkable progress, AI systems are not close to replacing human expertise in complex, high-stakes work.
In software alone, the majority of production-grade code, the kind that involves system architecture, security, reliability, edge cases, and multi-component integration, still depends heavily on experienced engineers. Humans are also required to train, fine-tune, evaluate, and supervise AI systems, and to define what correctness means in real-world contexts.
The same is true in healthcare, law, finance, and engineering. AI can generate drafts, suggestions, and analyses, but it still struggles with deep domain judgment, ethical tradeoffs, system-level reasoning, accountability for outcomes, and rare edge cases.
For the foreseeable future, we will continue to need human experts in all these fields.
Experts are not born. They are made.
Expertise is the product of practice, repetition, trial and error, debugging failures, seeing what breaks in real systems, and building mental models of how things actually behave.
Entry-level work has always been the crucible in which this transformation occurs.
This is the emerging risk: AI is not just replacing jobs. It is eliminating the path from novice to expert.
When AI handles drafting, coding, first-pass analysis, documentation, and basic decision-making, beginners lose the experiences that build judgment, intuition, systems thinking, error recognition, and a sense of responsibility for outcomes.
In effect, AI is denying humans the opportunity to be beginners.
This creates a paradox: we still need experts, yet we are dismantling the training process that produces them.
Corporations are making a seemingly rational short-term move: using AI to replace labor and justify massive investments in AI. But this creates a long-term strategic risk.
If current trends continue, several hard questions demand serious answers:
- How will work get done in the future if most people are locked out of meaningful skill development early in their careers?
- Without a pipeline of skilled human professionals, who will design the next generation of systems, debug catastrophic failures, supervise increasingly autonomous AI, define goals and constraints, and take responsibility when things go wrong?
- How will the workforce avoid cognitive atrophy as more thinking is offloaded to machines?
- What policies are called for at the corporate, government, and educational levels?
The biggest danger of AI isn’t mass unemployment; it’s the collapse of the learning systems that produce capable humans.
Returning to our biking story, our hero bought a bike, wheeled it outside, climbed on, cranked the pedals with confidence…
I’ll let you figure out what happened next.

4 Responses
Excellent article! The bike analogy is compelling.
Thanks, Bill. Actually, my older brother’s experience venturing into skiing for the first time inspired the bicycle story. He was a very cautious and analytical guy…
Gonzalo
- Fewer experts will be needed.
- The training period may be shortened.
- AI may help "cut to the chase" as to what an expert needs to know and how to get there.
Thank you for your comments, Lowell. I agree that AI can accelerate learning and help people reach competence faster. But history suggests that sweeping new technologies like electricity, the computer, and now AI expand economic activity and create new roles rather than reduce the need for expertise.
When factories electrified, each machine required fewer workers, yet entire new industries and specialized professions emerged. When computing automated clerical work, it didn't eliminate expertise; it produced software engineering, IT, data science, and digitally enabled fields across the economy.
At the same time, these technologies shortened early training but did not compress the path to mastery. Engineers, managers, and clinicians still developed judgment through years of real-world experience, not solely through instruction.
Healthcare is a good example: diagnostic tools, imaging, and clinical decision systems have made physicians more productive, but they have not reduced the need for expert clinicians. If anything, complexity increased, and experiential training remains essential.
That’s why eliminating entry-level work is risky. Those roles are the apprenticeship pipeline through which future experts are formed. If the pipeline breaks, we may end up with fewer true experts, even as AI makes basic skills easier to acquire.
Gonzalo