Issue 50 - Taste, Judgment, and Accountability: The Value You Bring
This edition breaks down how AI is impacting knowledge work, the value of human judgment, and how honing taste and judgment can future-proof careers.
This edition of Code, Content, and Career focuses on one issue only: the impact of AI on the future of knowledge work, and how we still factor in.
More and more knowledge workers are using AI in their workflows to get things done faster. For example, a seasoned software engineer can get a lot more work done with AI assistants and agents. Sometimes the adoption is by choice, but other times it's by mandate, and not always with great results.
AI tools look like a good opportunity (or at least a good scapegoat) for companies to trim down their workforce, which is often their biggest expense. Some larger enterprises have thousands of software engineers. To an executive team, even a small reduction can save a lot of money. Block let over 4000 people go, using AI as the reason.
AI is taking people's jobs. Or at least it's a convenient scapegoat. I don't believe AI replaced those people; rather, those companies decided that some of the work those people did is now work an AI can do well enough.
There's a lot to automate
One of the unfortunate realities in modern knowledge work is that there's a lot of boring, repetitive, manual work. Sometimes it's because software doesn't exist to solve it. Sometimes software exists, but it's too expensive. Sometimes the manual work is invisible, and sometimes there's just not enough time in the day to automate the manual work because there's too much work to do.
So companies hire people to do that work, either as full-time jobs or as part of their jobs. For the longest time, writing code was expensive, and companies hired lots of software developers.
AI tools can make a lot of that work go away very quickly.
I had a backlog of personal software projects I'd been meaning to get to, but I never had the time. Once I started using Claude Code, I was able to chew through that backlog pretty quickly, as long as I verified everything the coding agent did. It absolutely cut corners sometimes, and it made some horrible decisions that I vetoed. But at the end of the day, it saved me a bunch of time and let me scale myself.
AI tools help burn through mundane and repetitive tasks. The problem is that these tools are prediction engines. They don't actually think. That's where humans come in.
The Human Factor
Humans have judgment. Humans have taste. And more importantly, humans can be held accountable for how they use their judgment and taste.
Two years ago, Klarna laid off support staff, replacing them with AI agents. The problem is that those AI agents focused on closing tickets quickly. They didn't have the institutional knowledge or judgment to know when to override a procedure or bend a rule to do something to make a customer happy. They had to hire people back.
You've no doubt heard the story of the lawyer who used ChatGPT to research his client's case. He was sanctioned because ChatGPT made up the sources he cited. The lawyer substituted a robot's judgment for his own and was held accountable. That's the same issue Klarna ran into. In both cases, the idea of saving money or time won out over judgment and accountability.
AI tools can't think, but they can help you arrive at a decision much faster. You can feed an LLM facts or data, and it can help you sift through them, identify patterns, or even find gaps. Then you can verify its work and apply the experience you've built up over years of defining your taste and honing your judgment.
That's what companies are hoping for. Instead of 10 software engineers on a project, a company may only need a team of three and a budget for tokens. Those three engineers use their experience to guide the machines. Humans are still in the loop, but there are fewer of them driving things. That keeps overall headcount costs down.
Taste and judgment matter. Until they don't.
This is the real threat that some people are facing: sometimes good enough is good enough. Or sometimes the work you do doesn't require taste or judgment (or your employer doesn't think it should and won't pay for it).
This hits creative people the hardest, and I've seen this first-hand. When the emphasis is on clicks and views and producing output, taste and judgment just don't matter. You can point back to the Klarna situation and hope that those employers downplaying taste and judgment are held accountable, thus making them put some humans back in the loop. But even if that happens, it won't bring every human back into the loop because, again, there are many things AI can do "well enough" that the extra effort has diminishing returns.
Another place you'll face problems is if the work you do doesn't require taste or judgment and you're doing lots of work that AI can do. If you're building a large external system that millions of people use every day, you have to put lots of effort into your user experience, and you need your code to be performant. But smaller, internal apps have less stringent requirements, and they often don't even need a software engineer. We've seen this before. In the early 2000s, people ran whole businesses on Microsoft Access running on their desktops. And most small businesses don't build websites by hand either; they use online tools or content management systems with prepackaged themes. Could a seasoned engineer build these better? Absolutely. But it would cost more, and the return on investment would be hard to justify.
Don't outsource your experience
You'll find yourself in trouble very quickly if you choose to outsource judgment and taste to AI. I've seen people brag about how they used AI to draft a strategy document or even write performance evaluations. Asking the AI to make decisions or proposals has major impacts on those around you. An AI assistant might come up with a strategy that works on paper, but you know it wouldn't work under the current circumstances because you know how the people you work with operate. These agents don't have your context and institutional knowledge. Use AI to summarize and collect information, but don't let it do your actual job for you. People notice. And they'll notice that they don't need to pay you; they can replace you with the AI you replaced yourself with.
But you can also fall into this trap accidentally, especially when you work on things you don't understand well. When you don't know what you don't know, you can't drive the right outcomes. You won't catch the hallucinations, the made-up facts, or the corners the AI cut. You won't understand what it produced. You might get something to work, but it's more luck than anything.
Invest in taste and judgment
As an industry, we've placed so much emphasis on writing code, but if you talk to any senior developer with more than 10 years of experience, they'll tell you that knowing what code to write is much more important. The more things you build, the more projects you work on, the better you get at defining requirements, understanding edge cases, and knowing what works and what doesn't.
Your job has always been to make your employer more money than they pay you by using your talents and experience to solve their problems. They're not paying for your lines of code or words you write. Code is just one tool you use to solve problems. Now that AI can write code for you, use your taste and experience to scale yourself and drive better results.
- Review what AI produces against what you would have done. When you use AI to draft code, a document, or a plan, compare it to how you would have approached it. The gaps you spot are your taste and judgment in action. The gaps you miss tell you where to grow.
- Study decisions, not just skills. Read postmortems, architecture decision records, and project retrospectives. These show you why experienced people chose one path over another, not just how to execute.
- Work on things outside your comfort zone. Taste and judgment grow when you encounter new constraints. A backend engineer who ships a user-facing feature learns about UX tradeoffs firsthand.
- Get involved in ambiguous situations. Volunteer for the projects where requirements are unclear, where there's no obvious right answer. Those are the ones that force you to exercise judgment rather than follow a playbook.
- Seek feedback on your decisions, not just your output. Ask your peers and manager whether they would have made the same call, not just whether the deliverable looks good.
If companies do shift to smaller teams assisted by AI, they also have an opportunity to do more projects and generate more revenue instead of cutting costs. That means more jobs, especially for people who have honed their judgment and taste.
Things To Explore
- rulesync is a CLI tool that lets you write rules once and generate them for different AI assistants, so you don't have to maintain duplicate copies for each tool.
- h-m-m or "hackers mind map" is a terminal-based tool for working with mind maps.
Parting Thoughts
Over the next month, do a self-evaluation of the work you do daily. Ask yourself these questions:
- What work am I doing that requires my judgment to review?
- What work am I doing that's repetitive busy work that I could delegate to AI?
Once you've done that, try these two challenges:
- Pick a project you've done before. Try to drive an AI assistant to recreate a scaled-down version of it. Review the output you get and compare it against what you would have done. Then try again with a different prompt, attempting to fill in the gaps. Keep doing this until you get something you're happy with. Working against a problem you deeply understand will reveal the tools' limitations and help you get more comfortable with how you explain things to them.
- Pick something you have to do that doesn't actually require much judgment or taste. Dedicate some time to exploring how you can automate it. Your goal should be to remove one thing from your monthly to-do list. You don't have to automate it with AI, but you could use AI to assist you in creating the automation.
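To make that second challenge concrete, here's a minimal sketch in Python of the kind of small automation it describes. The task (sorting a cluttered downloads folder into subfolders by file extension) is a hypothetical stand-in chore; substitute whatever repetitive task is actually on your list:

```python
# A small, judgment-free chore automated: sort loose files into
# subfolders named after their extensions. The folder path and the
# sorting scheme are assumptions for illustration.
from pathlib import Path
import shutil


def sort_by_extension(folder: Path) -> dict[str, list[str]]:
    """Move each file in `folder` into a subfolder named after its
    extension. Returns a mapping of extension -> moved file names so
    you can review what the run did afterward."""
    moved: dict[str, list[str]] = {}
    for item in sorted(folder.iterdir()):
        if not item.is_file():
            continue  # leave existing subfolders alone
        ext = item.suffix.lstrip(".").lower() or "no_extension"
        dest_dir = folder / ext
        dest_dir.mkdir(exist_ok=True)
        shutil.move(str(item), str(dest_dir / item.name))
        moved.setdefault(ext, []).append(item.name)
    return moved
```

Returning a summary instead of moving files silently is the judgment part: the script does the busy work, but you still review the result, which is exactly the human-in-the-loop posture the rest of this issue argues for.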
Thank you for reading.
You just read issue #50 of Code, Content, and Career with Brian Hogan. You can also browse the full archives of this newsletter.