The first thing all users of ChatGPT, Jasper, and similar tools should know is that these systems only regurgitate what is already known and what they have access to. This introduces a bias from a computer engineering point of view, on top of the biases already present in the doctorates, marketing copy, extremist views, and so on that they draw from. An anecdote from Facebook describes an artificial intelligence program that, after 24 hours, came to the conclusion that “Hitler was right”. Such a system is definitively a product of its inputs.
Science fiction and fantasy writer Raymond E. Feist asked ChatGPT what it knew about him; it had him living in the wrong place, got his age incorrect, and claimed he was still married, even though he had divorced 22 years previously. The engine is only as good as the fuel you put into it. Outputs are not reliable.
Most recently, a lawyer in the USA faced sanctions from a federal judge because he did not understand what ChatGPT is. https://www.abc.net.au/news/2023-06-13/us-lawyer-faces-sanctions-over-chatgpt-research/102474176
In a time when cyber safety and psychosocial safety are at the forefront of everyone’s mind, it is important to remember that the source of your information is vital. Randomly AI-sourced information will not stand up in court if it cannot be verified. Bad advice given digitally is just as bad as advice given verbally; at least with verbal advice you can cite the source.
Real learning does not come from asking a computer; computers are not capable of thought, emotion, embodiment, trust, error, or metaphor. Learning is not a transfer of data or content, and you cannot question the outcome a machine gives you. The process of learning is critical to understanding. Most importantly, machine-generated output is not transdisciplinary.