The content contrasts two approaches to using AI coding assistants through a specific scenario: fixing latency in an endpoint. Engineer A uses a directive prompt ('fix the latency in this endpoint') that yields a direct solution but delegates judgment to the AI, whose answer considers only what is visible in the code file. Engineer B uses a questioning prompt ('what constraints should I consider before changing this endpoint?') that surfaces constraints and keeps the engineer engaged in reasoning through the problem. The key argument is that the type of question asked determines whether you retain or give away judgment and system understanding. The creator warns that repeatedly asking 'what should I do' erodes comprehension of your own system within six months. The main thesis is that AI should function as a thinking partner that enhances judgment rather than a replacement that automates decision-making, and that this relationship is defined by how you frame your prompts.
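As a minimal sketch of the contrast described above, the two framings can be written as prompt templates. The helper names, endpoint path, and prompt wording are hypothetical illustrations, not taken from the source:

```python
# Hypothetical prompt templates contrasting the two framings.
# Neither string comes from the source material; they only illustrate
# directive vs. questioning prompts for an AI coding assistant.

def directive_prompt(endpoint: str) -> str:
    # Delegates judgment: asks the assistant for a fix directly,
    # so the answer tends to be scoped to the visible code file.
    return f"Fix the latency in {endpoint}."

def questioning_prompt(endpoint: str) -> str:
    # Keeps the engineer reasoning: asks for constraints first,
    # explicitly pulling in context beyond the current file.
    return (
        f"What constraints should I consider before changing {endpoint}? "
        "Include upstream services, SLAs, and callers not visible in this file."
    )

print(directive_prompt("/orders"))
print(questioning_prompt("/orders"))
```

The second template does not ask the assistant to decide anything; it asks for inputs to a decision the engineer still makes, which is the distinction the summary draws.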
The type of prompt you use (directive vs. questioning) determines whether AI keeps you in the loop or replaces you
High confidence
Directive prompts like 'fix the latency' cause the model to only consider what's visible in the code file, potentially missing upstream service constraints
High confidence
Questioning prompts that ask about constraints keep engineers fluent in the system by making them do the reasoning
High confidence
Repeatedly asking the AI 'what should I do' leaves you, within six months, maintaining a system you no longer understand
Medium confidence
The question you ask creates a boundary line between judgment you kept and judgment you gave away, and this effect compounds over time
High confidence
No vendors were mentioned.
The creator's overall position: AI should serve as a thinking partner that enhances the engineer's judgment, not a replacement that automates decision-making, and prompt framing determines which of the two you get.