A New Year, A New Question About AI and Jobs
In Kenya, a teenager learning to code sits beside an AI tutor that never tires.
In Tennessee, a factory worker retrains in weeks, not months.
And in Papua New Guinea, young conservationists use AI to write proposals and analyze field data that once required years of training.
These aren’t predictions; they’re the first visible signs of a global transformation already underway. AI is no longer futuristic: it is reshaping how people learn and how they work.
At the same time, recent research from the McKinsey Global Institute warns of what could be the biggest workforce transition since the Industrial Revolution. Its models show that many of the tasks people do today could, in theory, be automated: figures that fuel fears about AI-driven job loss, inequality and a loss of purpose.
But “technical potential” is not destiny. The real test for 2026 is what we choose to build with it.
To explore that, I drew on Google’s paper AI and the Future of Learning and conversations with Ambassador Shea Gopaul (UN/IOE); Ben Gomes, Google’s Chief Technologist for Learning & Sustainability; Lila Ibrahim, Chief Operating Officer at Google DeepMind; and UN Young Leaders Enzo Romero (LAT Bionics) and Dikatauna Kwa (Eda Davara Marine Sanctuary). Together, they sketch a roadmap that is neither utopian nor apocalyptic—just practical.
1. Start With the Reality Check
In my new video interview with Ambassador Shea Gopaul, she insists we start with the ground we’re standing on. Roughly 2.6 billion people—about one-third of the world—still lack internet access. In many least developed countries, people don’t have reliable electricity, let alone computers or the skills AI assumes.
Women and girls face an extra gap: less education, fewer STEM pathways, more precarious work. Around 60% of the world’s workers are in informal jobs—and in sub-Saharan Africa, it’s closer to 90%—often without contracts, social protection or structured training.
On climate, she notes that fewer than 40% of countries even mention skills in their national plans. “Fail to plan,” she warns, “and you’re not going to deliver.”
Her message is simple: without serious investment in skills, the future of work will only grow more unequal. For AI to help rather than harm, it must do two things at once: boost productivity where tools already exist—and close the access gaps where they don’t.
2. From Learning to Earning—Faster
In their paper AI and the Future of Learning, Google’s James Manyika, Ben Gomes, Lila Ibrahim, Yossi Matias, and Christopher Phillips argue that AI can shrink the distance between understanding something and applying it—if it is built on learning science, not just raw output.
Gomes sees that on the ground. No-cost tools in Google Classroom, Gemini and NotebookLM act as teaching assistants, helping educators draft lesson plans, create assessments and tailor activities. In a Northern Ireland pilot, teachers reported saving up to 10 hours a week—time they could reinvest in students.
In a Miami AP Statistics class, past test questions were loaded into NotebookLM, and students used Gemini’s guided learning to explore why they had missed particular items. The AI handled explanations and follow-ups while the teacher moved around the room coaching.
One of the biggest myths, Gomes says, is that AI inevitably makes students intellectually lazy. With the right design, it can actually reduce what he calls “metacognitive laziness” by stripping away confusing, poorly tailored content so learners can focus on deep thinking instead of guesswork.
On the jobs side, he points to Google Skills and career certificates, where millions have completed AI and cloud courses linked to hiring processes. In some regions, candidates who finish Google Cloud certificates complete hands-on labs that simulate job scenarios; those labs now act as the first stage of recruitment, giving employers a clearer picture of skills and job-readiness.
A pattern is emerging: AI in education and training helps when it saves time, deepens understanding and connects visibly to employability and AI skills, not just test scores.
3. Rethinking Apprenticeships for the AI Age
For Gopaul, who founded GAN Global to expand earn-and-learn programs, any talk of “AI jobs” has to pass a simple test: does the training lead to real work?
Traditional two- or three-year apprenticeships are giving way to shorter, targeted models—some as brief as three to six months—focused on concrete digital, care and green-economy skills. Stackable micro-credentials build toward larger qualifications, and hybrid formats mix online content with on-the-job mentoring.
Small and mid-sized businesses, which employ most people worldwide, can’t do this alone. Gopaul highlights pooled apprenticeships where firms in the same sector share training, and multinationals that train more apprentices than they need, then “seed” those skilled workers into their supply chains.
She is clear about what success looks like:
“Scale is when you don’t just train people—you get the right skills and people are actually hired.”
AI can help match candidates to apprenticeships, forecast skill demand and deliver flexible, work-based learning at lower cost. But the future of work will still hinge on whether education, employers and policymakers build pathways that end in decent jobs, not just digital badges.
4. Building AI for Learning That People Can Trust
If AI-enabled pathways are the front door to new jobs, models like LearnLM are the engine underneath. Lila Ibrahim, COO at Google DeepMind, says the company sees education as a “root node” problem—like protein folding—where solving the core challenge unlocks many downstream benefits.
When DeepMind began building LearnLM in 2022, it didn’t just scale up a generic model. It formed a cross-disciplinary team of AI researchers, neuroscientists, cognitive scientists and educators to design the system more like a tutor than a search engine. “We didn’t want to just build a smart model and hope it worked for learners and educators,” Ibrahim explains.
Instead of simply giving answers, LearnLM is trained to guide students step by step, ask follow-up questions and support “productive struggle”—enough challenge to build understanding, not enough to cause shutdown.
The team has tested LearnLM in real educational environments—from Northern Ireland’s C2k program to pilots in Estonia and universities using Gemini Pro—continually evaluating not just correctness, but pedagogical quality and safety. LearnLM is now infused into Gemini, so those learning behaviors show up in tools people already use.
For schools and families, the message is simple: not all AI in education is created equal. Before adopting tools, they should ask whether the AI was built and tested with educators and learning scientists, whether it explains and coaches or just hands over answers, and how it is monitored for safety, bias and real-world learning impact.
5. What Young Innovators See That We Don’t
The UN Youth Office’s Young Leaders for the SDGs bring voices that are usually missing into the AI and work debate, especially from the Global South.
Enzo Romero – founder & CEO, LAT Bionics; UN Young Leader for the SDGs (Peru)
At LAT Bionics in Peru, Enzo’s team builds affordable, AI-driven, 3D-printed prosthetic arms. He sees AI helping young engineers and designers speed up their learning in robotics, biomechanics and digital manufacturing.
“Tasks that once required years of specialized tutoring,” he notes, “are now within reach for students who are just beginning their careers.”
But training programs for rural communities are often designed in big cities and don’t match local realities. With the right AI tools, he believes young people in smaller towns could learn high-value skills—including programming—without relocating or waiting for new courses.
His ask to leaders is concrete: modernize how governments adopt new technologies and invest in AI infrastructure and reliable internet in rural areas, so innovation in the Global South isn’t permanently stuck in pilot mode.
Dikatauna Kwa – founder, Eda Davara Marine Sanctuary; UN Young Leader for the SDGs (Papua New Guinea)
In Papua New Guinea, Dikatauna Kwa runs the Eda Davara Marine Sanctuary, a youth-led hub for conservation and community education. She sees AI helping young people draft proposals, analyze environmental data and communicate their projects more clearly.
“Even simple tools like AI-assisted writing or research,” she says, “are giving youth the confidence to participate in programs, apply for opportunities, and tell their stories more effectively.”
A major barrier she sees is the lack of strong training in research, monitoring and environmental literacy—skills that matter for both green jobs and local livelihoods. AI, she argues, can act as an on-demand tutor, helping youth practice real-world tasks like data interpretation and project planning even outside a formal classroom.
Her plea to global leaders: invest in localized AI tools built for young people in countries like Papua New Guinea—systems that understand local languages, ecosystems and priorities so AI supports grassroots innovation instead of reinforcing existing divides.
Their shared message to older decision-makers: AI is already helping, in small but powerful ways—but only when access, context and opportunity line up.
6. Governance, Not Hype, Will Decide Who Wins
Beneath all the pilots and promises sits a harder question: who is accountable for keeping AI in education and work aligned with human rights and decent work?
Drawing on her work with the ILO and UN, Gopaul says every serious use of AI in the workplace needs four basics:
- Clear risk ownership – someone accountable from design to operation
- A practical governance blueprint – how the system is used, monitored and updated
- Simple, transparent communication – so workers and employers understand what’s happening
- Independent audits and KPIs – to track bias, fairness and employment outcomes over time
She’s adamant that SMEs, workers and vendors all need seats at the table—not just big tech and national governments. “You need that human-in-the-loop community,” she says, “or you won’t get trust.”
For policymakers and business leaders, that means AI for the future of work isn’t about “AI jobs” or flashy demos; it’s about funding, regulation and long-term partnerships that make transitions real for people with the least power.
7. A Roadmap for 2026: Turning Fear Into Agency
Taken together, these voices offer a practical roadmap for the new year.
For schools and universities
- Use AI first to save teachers time and improve feedback, not replace teaching.
- Choose tools that guide thinking and explanation, not just generate answers.
- Track impact on equity—who benefits, and who is left out.
For employers
- Shift toward skills-based hiring, using AI simulations tied to real tasks.
- Offer short, stackable earn-and-learn programs for youth and mid-career workers.
- Support SMEs through shared training initiatives and supply-chain skilling.
For policymakers
- Treat connectivity, devices and AI literacy as core work infrastructure.
- Embed reskilling and apprenticeships in climate, digital and industrial strategies.
- Require governance, audits and worker voice for AI in employment systems.
For youth and families
- Use AI to build projects and portfolios employers can see, not just polished homework.
- Seek trusted online credentials linked to real jobs, not generic badges.
- Use AI as a coach and amplifier—for learning, applications and storytelling.
AI will transform the future of work; that much is certain. Whether it drives decent work and shared prosperity or deeper insecurity depends on what we do now: how we rebuild education, business and policy so that, beginning this year, many more people share in the gains.