In general:
- Wikipedia: now mostly reliable on points of fact (when was so-and-so born, who were their parents, where do they teach, what are their most famous works) and completely unreliable on matters of interpretation. That is: you may quote Wikipedia for the former, but you are responsible for the interpretation and must come up with your own.
- ChatGPT (and any variations on it): it can be surprisingly helpful and amazingly misleading, if not absolutely wrong. Assume that the faculty use ChatGPT (and assume that the faculty assume that students use it)... Again: all matters of interpretation (what to say about some work) and expression (how you say what you want to say) are your personal responsibility---whoever you might have consulted. Specifically:
- If you use any of these tools, mention it (in a footnote if you used them mostly for grammatical reasons). If your use involves quoting from an answer to some question, do quote, and cite it as such (e.g. "ChatGPT answer to the following question: [MENTION THE QUESTION!]").
- Do not trust the response any more than you would trust a third-level source (e.g. in a textbook or the like). By design, AI is just such a source: it summarizes what a lot of people have said about other people writing about some author.
- For some fun, check an AI review (published by Academia.edu) of one of my papers. Note a few things:
- the review is about as long as the article;
- it includes a lot of evaluative terms ("sophisticated," "robust," "critical engagement," etc.) that add nothing to the argument;
- my own review of this review: it flattens the article.
- In summary: read the article itself!