This is great! In a way it's like the output was a question rather than an answer.
This is quite similar to education research on Teachable Agents [1], which embodies the idea that to know something you must be able to teach it to someone else. In Teachable Agents you teach the computer rules (e.g. how ecosystems work), and then it takes a quiz where its output is compared to the right answer. When it's wrong, you as the teacher must figure out whether the rules you taught it were incorrect and/or whether it needs more rules.
Teachable Agents works for things with a right or wrong answer, because the computer does the test proctoring. But in your method the human does that, and I think it works quite well for things of a more qualitative nature like the arts, with the human playing the role of the critic or curator.
[1]: https://slate.com/technology/2015/04/teachable-agents-making...