The Arkansas State University Guidelines and Considerations for the Ethical Use of Artificial Intelligence (AI) were developed in Spring and Summer of 2024 by the campus Artificial Intelligence Policy Committee, a group of faculty and administrators from across campus. The Guidelines were informed by commonalities among policies at a variety of other institutions, and they incorporate existing A-State tools and platforms. The intent is that these guidelines accommodate a diverse spectrum of opinions regarding the use of AI and that faculty retain the authority to determine the scope of acceptable AI use in their courses.
Faculty are advised to include their AI policies in their syllabi and are encouraged to use the provided Guidelines to determine what those policies are and how to state them clearly.
Overview and Rationale
Within the current educational landscape, the integration of Artificial Intelligence (AI) technologies holds the potential to enhance all aspects of the teaching and learning process and may streamline various administrative processes as well. To operate effectively and competitively as an institution of higher education, Arkansas State University must maximize opportunities to benefit from these rapidly evolving technologies. At the same time, however, A-State must conscientiously address ethical considerations to guarantee the responsible and fair utilization of these tools, aligning with our institution's fundamental values and principles.
Human-Centered Approach
Institutions of higher education, including Arkansas State University, were founded and designed to enhance human potential, and human intellect and capabilities clearly remain central to the mission of this institution. While artificial intelligence (AI) has made significant advancements in replicating certain human abilities, it lacks most aspects of human cognition, emotion, creativity, and consciousness. Consequently, A-State’s approach to the use of AI should likewise be human-centered, giving precedence to AI systems designed around human needs. Such systems should be discussed openly, and their use crafted to enhance human intelligence and expertise rather than to supplant them.
Ideal human-centered practices regarding AI use include, but are not limited to:
Communicating Diverse Approaches and Needs
Amid the evolving contexts of higher education and the dynamic AI market, the adoption of standardized, universally applicable AI policies proves impractical in the long run. Such policies may fail to accommodate the diverse perspectives instructors hold regarding AI integration in their teaching practices. Given the diverse array of roles and potential applications for AI within an institution of higher education, it is imperative to accommodate a variety of approaches that encompass the needs and requirements of all involved in the teaching and learning process.
Establishing transparency and clearly communicating expectations will foster an atmosphere of trust within a fair and inclusive approach. It is the responsibility of each college, department, and individual faculty member to establish clear channels of communication with students regarding the use of generative AI in their programs and courses. Explicitly communicating the extent to which AI may or may not be used by students in each course or program will provide concrete guidelines regarding the ethical use of AI by students for academic purposes.
Policy Considerations
As colleges, departments, and faculty members develop AI policies and syllabus statements, their reasoning should be rooted in the intellectual content and expectations of their courses and their disciplines. They should contemplate questions such as: What might students gain or lose through the integration of generative AI in the course? What aspects of AI and intellectual development do they aim to convey to students? Instructors are encouraged to articulate the rationale behind their policies to their students and to initiate discussions about AI use with them. Such discussions offer opportunities to engage students, to better understand their use of AI, and to help them expand their AI literacy.
AI Literacy
Generative AI is rapidly evolving as a novel information resource, and the guidelines for citation and its overall utilization are still in flux. Instructing students on when and how to use AI tools, and how to cite AI-generated content, reduces uncertainty for students about how to adhere to the policy regarding Academic Misconduct as it appears in the A-State Student Handbook.
Discussing AI use with students also gives faculty members the opportunity to point out its limitations. In addition to lacking human consciousness, emotion, and creativity, current generative AI tools may produce output that includes incorrect citations, various types of bias, and other inaccuracies. As students become cognizant of these limitations, they are empowered to adopt AI tools proactively in a manner that is ethically sound, efficient, and responsible.
Policy Statements Defining Acceptable Use
There are several possible approaches for addressing AI use at the program or course level. Under each approach, statements appearing in syllabi or within specific assignment guidelines should fully explain the circumstances under which students may use generative AI. Furthermore, if AI tools are restricted in any capacity, it is crucial to engage in discussions with enrolled students regarding the scope and conditions of their use within the course or program.
Common Language for Policy Statements
To better resolve any potential conflicts and facilitate communication about the complex topic of AI and its use, it is beneficial for all syllabi and assignment guidelines to use the same definitions for AI and its derived technologies.
Approaches for Syllabus Policies on AI Use
Colleges, departments, and individual faculty members should weigh the potential benefits of four basic approaches to the use of generative AI by students and craft statements reflecting the most effective policies for their syllabi and assignment guidelines. The University of Delaware’s Center for Teaching and Assessment of Learning fully describes these approaches on its website.
AI Detection Software
Arkansas State is well positioned to monitor the use of AI in courses through AI detection tools such as Turnitin and the AI Detection Platform by K-16. It is critical for instructors to understand the limitations of AI detection software: these tools should be used as a guide rather than as confirmation that a student has used AI. In addition to AI detection tools, instructors who wish to detect the use of AI could consider software that tracks keystroke input and records the editing process. For example, Google Suite’s version history effectively maps how a document was formed, and Draftback is a Google Chrome extension that lets users play back the revision history of any Google Doc they have edit access to.
In courses using Turnitin, all submitted assignment content is reviewed by a tool specialized for student writing that is highly proficient in distinguishing between AI-generated and human-written content. The K-16 AI Detection Platform is integrated as a Learning Tools Interoperability (LTI) tool within A-State’s Canvas LMS platform in all courses to check assignments, quizzes, and discussion boards. Instructors have access to the administrative platforms of both the Turnitin and K-16 tools to help ensure students are upholding academic integrity.
As outlined throughout this document, AI is an evolving technology that will require colleges, departments, and faculty to continue evolving their approach to monitoring for AI use. The advantages of AI detection software can be identified as follows:
AI detection software is also an evolving technology. By using a variety of technologies and methodologies to distinguish between human-generated and machine-generated content, these platforms provide statistically based reviews of the likelihood that AI was used in the analyzed content. The AI Detection Platform by K-16, for example, uses the same artificial intelligence model by OpenAI as ChatGPT and other leading AI-driven platforms that are readily available.
Guidelines for Use of AI Detection
It is important to note that no AI detection platform should be used as the sole basis for identifying academic dishonesty involving AI use. While AI tools can be valuable assets in identifying academic dishonesty, their use should be carefully considered and balanced with ethical considerations, transparency, and the educational mission of fostering an environment of trust and learning. Instructors should adopt the following best practices when utilizing AI detection software:
(Last updated May 2024)