With most advanced interfaces or technologies, people adapt their interactions to the tool as much as the tool is adapted to them. If the tool can itself evolve, there is perhaps an equilibrium point at which a person becomes productive with it in an idiosyncratic way. Proposals for AI agents rarely seem to consider this. "Enactive cognition," in which cognition is modeled on the way a cell and its environment mutually define the boundary between them, is perhaps the framework to pay attention to here.