Artificial intelligence (AI): two words that either elicit excitement or cause anxiety. The latter is probably a result of movies such as the Terminator franchise, I, Robot, or the latest horror movie about AI, M3GAN. But don’t worry, Skynet is not a reality.
What we, the Advocacy & Welfare team, are encountering this year are cases of students using AI-powered writing assistants such as ChatGPT and Quillbot to write their assignments. The university treats the unauthorised use of AI to complete assignments as academic misconduct. Students found guilty will receive 0% for the assignment and have their behaviour recorded on the university’s Misconduct Register for 10 years. In severe cases, the student could receive a zero for the entire course. There is also a risk of receiving an “X” grade. So, this is a serious issue.
We need to keep in mind that AI-generated writing is really not great. Eric Wang, the TurnItIn Vice President for AI, has described text generated by AI-powered writing assistants as “very very average” and “uncannily average”. These assistants can mimic the way humans write, but they cannot mimic the individual idiosyncrasies each of us has in our writing. In other words, the quality of work generated by AI-powered writing assistants is very basic. It may be considered good for an AI system, but it is not up to the standard required of university students.
To help combat the widespread use of AI-powered writing assistants, various software companies have released AI checking tools. These are tools that can analyse a piece of writing and determine whether it is written by a human, by AI, or by a combination of both.
A significant issue with these checking tools is the rate at which they return false positives (i.e., flagging human-written text as AI-generated). This means students may be falsely accused of using AI to complete their work if educators base their allegations solely on the results of AI checking tools, and this can have major impacts on the students, such as:
- Damaged relationship with their lecturer;
- Unfair punishments;
- Damaged academic reputation; and
- Emotional stress and negative impact on overall wellbeing, which can then affect their studies.
Considering that Microsoft is incorporating AI into all their Office applications, including a “Copilot” feature in Word, it is clear that AI is not going away. Furthermore, we are likely to see more and more AI applications in the future.
So, what does this all mean for us at UCSA Advocacy & Welfare?
For one, the UCSA Advocacy & Welfare team will continue to support students who have been accused of academic misconduct involving the use of AI-powered tools. We will ensure that the rights of students are respected, and advocate for students who have been falsely accused.
We will also do our very best to communicate with the university effectively on the matter, including encouraging the University to better communicate its expectations about what sorts of digital tools are and are not acceptable for students to use in their studies.
This is because companies that make digital tools like to portray them as safe and acceptable for students to use, and many students will simply accept these statements at face value, not realising that the University might think otherwise. As such, the University needs to communicate its expectations clearly.
Now that you know what the Advocacy & Welfare Team will do in the space of Artificial Intelligence, how about you?
What should you, a UC student, do so that you don’t get into trouble with the university?
First and foremost, do not use AI writing tools such as ChatGPT unless your lecturer has specifically instructed you to use one for a particular task. If you choose to use such a tool without permission, you will be at risk of being pulled up for academic misconduct.
All universities take academic integrity very seriously. In the US, educational institutions can, in some circumstances, pursue criminal charges against students over academic misconduct: earning a grade without the student’s own academic merit effectively defrauds the institution, and is considered a type of fraud. This can therefore lead to criminal prosecution in the US.
Students are unlikely to be criminally prosecuted for academic dishonesty in New Zealand, but it can still get you into serious trouble with the university. In the worst-case scenario, students found guilty of cheating can receive an “X” grade, meaning “Dishonesty”, on their official transcript. This “X” grade stays on your academic transcript forever, and your official transcript is something a potential employer may ask for. Employers look for staff who have integrity, and it is one of the questions referees are asked in all reference checks. There is nothing worse than having a potential employer see an “X” grade on your official academic transcript, since that implies you are dishonest and untrustworthy.
Secondly, as mentioned above, the writing produced by AI writing tools is mediocre. If AI writing is really not that good, why would you risk your future career by using it? You may think the university cannot find out that you have used AI in your assessment because no reliable tool currently exists to tell whether a piece of writing was done by a human or by AI. Note, however, that with how quickly technology advances, one might appear in the near future. There are also other ways the university can figure out whether a piece of work was done by the student. For example, ChatGPT has a knack for creating fictional references.
That’s right, ChatGPT makes up fake references and includes them in its writing.
Students are not the only ones who have been caught out by fake references. In May this year, it was reported that a New York lawyer faced a court hearing over his use of ChatGPT for legal research, after the very “helpful” ChatGPT gave him non-existent examples of legal cases. The BBC article wrote:
“Screenshots attached to the filing appear to show a conversation between Mr Schwarz and ChatGPT.
"Is varghese a real case," reads one message, referencing Varghese v. China Southern Airlines Co Ltd, one of the cases that no other lawyer could find.
ChatGPT responds that yes, it is - prompting "S" to ask: "What is your source".
After "double checking", ChatGPT responds again that the case is real and can be found on legal reference databases such as LexisNexis and Westlaw.
It says that the other cases it has provided to Mr Schwartz are also real.”
Basically, ChatGPT writes poorly, lies, has no integrity, and cannot be trusted. So why risk your future by using it?
The best thing for you to do is to not use AI writing tools for your university work. That is, unless you have been instructed by your lecturer to use it for a specific piece of work.
But if you do use it one day and get into trouble with the university, it is not the end of the world. We know that students rarely resort to this sort of conduct out of a desire to cheat their way to a degree. Instead, it is usually a combination of poor time management, difficulty understanding their assessment, and a fear of being disadvantaged if they do not also use these tools that leads students into academically dishonest conduct.
We will help and support you in any meetings with the Academic Integrity Officer of your Department/School, or the Proctor. We will also support you to find better ways of overcoming any barriers and find success in your studies.
So, if you or any of your friends are accused of using AI to complete your assignments and would like some help, contact us. And if you encounter anyone who is unsure as to whether it will be a good idea to use AI, encourage them to contact us too.
Who are we?
We are the UCSA Advocacy & Welfare team, and you can contact us at firstname.lastname@example.org.
Important note: This piece was written without the aid of an AI-powered writing assistant.
… or was it?
Written by Dr Ee-Li Hong 方一立 PhD (Psychology)
- The Advocacy & Welfare Team