Using AI ≠ AI Native: What It Really Means to Transform How Teams Work
The difference between using AI as a tool you pick up occasionally and treating AI as infrastructure you don't know how to work without.

A few days ago, I posted that our team is going AI Native. A friend asked: if I use ChatGPT for copywriting and Copilot for code completion, does that count?

No. That's using AI. It's not being AI Native.

The difference? One treats AI as a tool you pick up occasionally. The other treats AI as infrastructure—you don't know how to work without it. Like the difference between "knowing how to use a computer" and being a digital native.

After thinking this through, I've identified three core characteristics.

Three Characteristics of AI Native Teams

All Deterministic Work Is AI-Powered

Every repetitive, process-driven, deterministic task gets handled by AI—fully or with assistance.

What's deterministic work? Tasks with clear inputs, clear rules, and clear outputs. These tasks share one thing: they have standard answers, or at least standard processes. When humans do this work, we're executing algorithms. So why not let AI—which excels at algorithms—handle it?

"An algorithm (/ˈælɡərɪðəm/) is a finite sequence of mathematically rigorous instructions, typically used to solve a class of specific problems or to perform a computation." — Wikipedia

My content workflow—research, writing, translation, social posts—involves AI at every step. I focus on "what to write" and "how to make it better," not "how to turn ideas into text."

AI Becomes the Work Entry Point

What's the first app you open when you start your computer?

My old answer: Emacs, VS Code, and a browser. (Yes, Emacs. Vim users, please hold your fire—that's not today's debate.)

Now: Claude Desktop and a terminal running Claude Code and Codex. I rarely open Emacs or VS Code anymore. Editors I used for over a decade, just… set aside.

Not because they're bad—my workflow changed. Coding went from "I write in an editor" to "I talk to AI, and AI writes for me." I spend more time describing intent, reviewing output, and giving feedback than typing line by line.
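To make the earlier notion of deterministic work concrete, here is a minimal sketch of one step from a content workflow: turning an article's metadata into a social post. The post format and the function name are hypothetical, invented for illustration; the point is that the same input always yields the same output, which is exactly what makes the task worth handing off.

```python
# A deterministic task: clear input, clear rules, clear output.
# When a human does this by hand, they are executing an algorithm.
# The house post format below is an assumption, not from the original post.
def social_post(title: str, url: str, tags: list[str]) -> str:
    """Render a social post announcement from article metadata."""
    hashtags = " ".join("#" + t.replace(" ", "") for t in tags)
    return f"New post: {title}\n{url}\n{hashtags}"

print(social_post("Going AI Native", "https://example.com/ai-native",
                  ["AI", "work flow"]))
```

Once a task can be written down this mechanically, it is a candidate for full automation, with a human only reviewing the output.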
(I wrote about this shift from editor to AI in a previous post on the "100x engineer" concept.)

It's not just coding. Research happens in ChatGPT's Deep Research. Image generation in Midjourney or Nano Banana. Docs in Claude. Every work scenario has an AI app as its entry point.

AI is no longer one tool in the toolbox. It's the workbench itself. You're not picking up a hammer occasionally—your entire work happens on this platform.

Token Consumption Becomes a Metric

Here's something I realized recently: you can gauge whether a team is AI Native by its token consumption.

Admittedly, this is a rough proxy. High consumption doesn't always mean effective use—it could just be inefficient prompt debugging. But assuming effective usage, it's a hard metric. A few thousand tokens a day means traditional work patterns with occasional AI. Hundreds of thousands, or millions? Work has been restructured around AI.

Retool's 2024 State of AI report backs this up: 64.4% of daily AI users report significant productivity gains, versus just 17% of weekly users. The relationship is non-linear—it's not "use more, get slightly better." Cross a threshold, and your entire way of working transforms.

Here's a wilder thought—pure speculation, and it might be wrong. We use GDP to measure national economic output. Could a future metric—national token consumption—measure a country's intelligence utilization? GDP measures goods and services produced. Token consumption measures intelligence used.

Sam Altman recently floated "intelligence too cheap to meter"—costs approaching zero, like the old nuclear-era vision of electricity too cheap to meter. If that happens, whoever better leverages near-free intelligence wins.

Sure, implementing this raises questions: How do you measure? Who measures? How do you compare across countries using different tools? But as a thought experiment, it hints that the dimensions of competition may shift.

The Real Value

What's the actual value of going AI Native? Unleashing creativity.
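Circling back to the token-consumption proxy for a moment: a minimal sketch of how a team might tally it, assuming usage logs can be exported as (user, tokens) records per day. The threshold constants are rough numbers lifted from the discussion above, and the labels are my own invention.

```python
from collections import defaultdict

# Rough daily thresholds from the text: a few thousand tokens/day suggests
# occasional AI use; hundreds of thousands suggests restructured, AI Native
# work. The exact cutoffs here are assumptions for illustration.
OCCASIONAL_MAX = 10_000
NATIVE_MIN = 100_000

def classify_daily_usage(records):
    """records: iterable of (user, tokens) pairs for one day.
    Returns {user: label} using the rough thresholds above."""
    totals = defaultdict(int)
    for user, tokens in records:
        totals[user] += tokens
    labels = {}
    for user, total in totals.items():
        if total >= NATIVE_MIN:
            labels[user] = "AI Native"
        elif total <= OCCASIONAL_MAX:
            labels[user] = "occasional"
        else:
            labels[user] = "transitioning"
    return labels

day = [("alice", 250_000), ("bob", 3_000), ("carol", 40_000)]
print(classify_daily_usage(day))
```

A real version would read from provider usage APIs and average over weeks, but even this toy form shows why the metric is "hard": it measures what actually ran, not what people say they do.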
Once AI handles deterministic work, human energy focuses on three things:

Creativity: new ideas, new problems, new angles. AI still struggles here.

Thinking: judgments, decisions, trade-offs. AI provides options and analysis; humans make the call.

Collaboration: two layers here. Human-to-human—with repetitive work offloaded, collaboration becomes purer, more about ideas than information transfer. Human-to-machine—a new form. You learn to express intent clearly, provide context effectively, and evaluate and guide AI output. This skill matters more and more.

How to Get There

Now for the "how." Transformation needs parallel progress on three levels.

Tools: Provide the Best, Set Automation Metrics

Teams need the best tools available. Not cheap, but worth it. Claude Pro, ChatGPT Plus, Cursor, specialized AI tools—get what's needed. Without proper tools, transformation is just talk. Tool investment should match team size.

The core isn't spending—it's building awareness. Require everyone to regularly audit their work: "What deterministic, repetitive tasks haven't been automated yet?" That answer becomes a key metric. Not "how often did you use AI," but "how many repetitive tasks did you eliminate."

Capabilities: Make AI Skills a Job Requirement

People requirements change too. AI Native team members need two capabilities:

Understanding AI fundamentals: You don't need to train models, but you should understand how LLMs work, where their boundaries are, and which tasks suit them. This enables effective use instead of blind trust or blanket dismissal.

Proficiency with AI tools: prompt-engineering basics, common tool operations, integrating AI into workflows. Hands-on capability.

Both should be explicit job requirements, including in hiring. Interview questions: How do you use AI daily? What problems have you solved with it? What pitfalls have you hit? Candidates who can't use AI will struggle on an AI Native team.

Culture: Pride in Automation, Shame in Repetition

Most important: culture. Build a shared understanding: automation is a source of pride; manual repetitive work calls for reflection.
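The automation audit mentioned under tools can be tracked with a very small amount of tooling. A minimal sketch, assuming each audit produces records of who owned a repetitive task and whether it got automated; the record format is hypothetical.

```python
# Sketch of the audit metric: not "how often did you use AI" but
# "how many repetitive tasks did you eliminate". The log schema below
# is an assumption, invented for illustration.
def tasks_eliminated(audit_log):
    """audit_log: list of dicts like {"owner": str, "task": str,
    "automated": bool}. Returns automated-task counts per owner."""
    counts = {}
    for entry in audit_log:
        if entry["automated"]:
            counts[entry["owner"]] = counts.get(entry["owner"], 0) + 1
    return counts

log = [
    {"owner": "alice", "task": "weekly report", "automated": True},
    {"owner": "alice", "task": "release notes", "automated": True},
    {"owner": "bob", "task": "data entry", "automated": False},
]
print(tasks_eliminated(log))
```

The number itself matters less than the ritual: the unautomated entries left in the log are the next quarter's to-do list.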
When someone does something repetitive manually, the reaction shouldn't be "they're hardworking"—it should be "why hasn't this been automated?"

This needs supporting metrics. Traditional evaluation looks at outputs—code written, features shipped, articles published. AI Native evaluation focuses on builders created: What automation did you build? What repetitive work became one-time work? How many people reuse your tools? Shift evaluation from "final outputs" to "the capability to produce outputs."

We used to say "don't reinvent the wheel." Now: "build machines that build wheels."

Closing

AI Native isn't binary—it's a spectrum. No one goes AI Native overnight. It takes continuous learning, experimentation, and workflow adjustment. Our team is still transforming, far from done.

This post is less experience-sharing and more planting a flag. A year from now, I hope these aren't just ideas—but practices that have taken root.