February 9, 2026

Choosing Depth in an Age of Speed

Brian Steele, Head of AI at Honor Education

Over the last year of watching our engineering team at Honor explore agentic AI tools, here's what I've learned:

AI doesn't inherently make people better or worse at their jobs; it's just a tool. But it is an extraordinarily powerful one.

What Happens When We Outsource Our Thinking

AI can generate mostly-correct code at a speed that would have seemed absurd a few years ago. The same is true for essays, research, analysis, and nearly every type of knowledge work. That speed creates a temptation that's hard to resist: letting the tool do the thinking for you.

When software engineers skip the work of understanding the problem they're trying to solve, forgoing the architecture wrangling and system design, they lose the most important part of the process: the learning itself. They may end up with something that works, but they've outsourced the thinking and robbed themselves of the opportunity to grow.

This goes beyond code, extending to writing, research, and all kinds of knowledge work. Today's models are good enough to let us skip the struggle entirely, but that shortcut comes at a cost. It's tempting to think we're eliminating a problem when we avoid the hard part, but I'd argue that the struggle is the learning. It's what turns knowledge into wisdom.

The Power of Teaching Mode

With that in mind, how do we capture the efficiency of AI tools while still using them for depth? At Honor, our engineers have been putting models in teaching mode. We tell the model not to give away the answer, but to guide us toward it. We ask it to quiz us, to test our recall, and to help us build mental models rather than just generate output.
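For the curious, here's a minimal sketch of what "teaching mode" can look like in practice. The prompt wording and the helper function are illustrative assumptions, not Honor's actual implementation; the idea is simply to prepend tutoring instructions to any chat-style request before it reaches the model.

```python
# Hypothetical sketch of a "teaching mode" wrapper for a chat-based model.
# The system prompt text is an illustration, not Honor's actual prompt.

TEACHING_MODE = (
    "You are a tutor, not an answer machine. Do not give the solution outright. "
    "Guide me toward it with questions, quiz my recall, and help me build a "
    "mental model of the problem before any code is written."
)

def teaching_messages(user_prompt: str) -> list[dict]:
    """Build a chat message list that puts the model in teaching mode."""
    return [
        {"role": "system", "content": TEACHING_MODE},
        {"role": "user", "content": user_prompt},
    ]

# Example: ask for guidance on a problem instead of a finished answer.
msgs = teaching_messages("Help me understand how our job queue handles retries.")
```

The wrapper composes the message list only; you would pass `msgs` to whichever model client your team uses. Keeping the tutoring instructions in the system role means every exchange in the session stays in teaching mode, not just the first question.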

The results for us have been positive: engineers not only understand the code more deeply (good) but also find they genuinely enjoy the process of learning (great!).

We're also constantly questioning our own process. Is the way we're using these tools making us more effective, or just faster? Are we still debating the tensions in our approach, still making sure we understand the problem before we reach for the model? The most important thinking often happens before the LLM is ever involved.

Integrating this approach into your workflow can be simple: after each session with the model, ask yourself whether you let it think for you or whether you used it to think more deeply. That honest check-in is where the real leverage is.

The Path We Take From Here

It's clear we've arrived at an important moment. There's a lot of anxiety in the air about what AI means for depth, for learning, for the kind of deep thinking that gives work its meaning.

Some of that concern is warranted. But here's why I'm hopeful: we get to choose. Down one path, we use these models to produce "good enough" work. We skip the reading, the struggle, the deep understanding—and hope it doesn't catch up with us. Down the other, we learn to use the tool not to replace our thinking but to sharpen it. To go deeper. To attempt more difficult things. 

It's never been easier to outsource our thinking. But it's also never been a better time for learning. The choice lies with us.

Want to try Honor yourself?

Explore how Honor would benefit your organization with a custom demo.
