Student Use of Artificial Intelligence (AI)
1. PURPOSE
This policy establishes guidelines for the ethical, responsible, transparent, and appropriate use of AI technologies by students in academic activities, including (but not limited to) didactic courses and clinical training. It aims to promote deeper learning and critical thinking, foster innovation, and prepare students for the use of AI in the workforce, while upholding academic honesty and ensuring equitable access to learning. Students are encouraged to use AI tools thoughtfully and responsibly, recognizing that AI technologies have environmental impacts related to energy and resource consumption.
2. SCOPE
This policy applies to all students enrolled at Bastyr University, across all programs and levels of study, including undergraduate, graduate, and professional education, as well as certificate programs.
3. DEFINITIONS
- Deeper learning: Developing comprehensive understanding and skills beyond memorizing facts; ability to apply knowledge to new situations, think critically, solve problems, and collaborate effectively. It includes competencies such as academic content mastery, critical thinking, communication, collaboration, creativity, and an academic mindset.
- AI Tools: Software or platforms that use machine learning, natural language processing, or other forms of artificial intelligence to generate, analyze, or assist with content, work products, processes, evaluation of research, etc. (examples might be, but are not limited to, ChatGPT, Grammarly, Copilot, or Gemini).
- Academic Activities: Any assignment, project, exam, presentation, class participation, recording or transcription, or research submitted for evaluation, credit, or course completion requirements. This includes activities in which the university is represented (e.g., conferences, practicums, internships).
- Clinical Training: Any structured, supervised, hands-on practice, in real or simulated health care settings, that allows students to apply knowledge, develop clinical skills, and demonstrate competencies in caring for patients (e.g., charting, diagnosis).
4. GENERAL GUIDELINES
- Transparency: Before engaging in the work, students must gain approval from the relevant instructor or clinic supervisor for the use of AI tools. If approved, students must disclose the use of AI tools in their academic and clinical work, including the name of the tool, the nature of its contribution, and the prompt(s) used. While APA style is the primary citation style at Bastyr, students shall refer to the MLA Handbook for instructions on how to report the use of AI in their work products.
- Instructor Discretion: Instructors may set additional course-specific rules regarding AI use beyond those stated in this policy. Such rules must be clearly communicated (e.g., in the syllabus or on the Canvas page). Students are responsible for understanding and adhering to these rules.
- Originality and Integrity: AI-generated content must not be submitted as original work unless explicitly permitted. Students remain accountable for the accuracy, originality, and ethical standards of their submissions.
- Privacy and Data Security: Students must not input sensitive personal or institutional data into AI tools. Sensitive information includes content shared by instructors and clinical supervisors (please refer to the school’s copyright policy), patient information (please refer to HIPAA), and institutional data (please refer to FERPA).
5. PERMITTED USES
- Use of AI tools must comply with any course-specific guidelines provided by an instructor and must not violate the academic honesty policy of Bastyr University.
- Drafting outlines, brainstorming ideas, or improving grammar and style, when allowed by the instructor (students must seek approval before engaging in the work).
- Learning support, such as tutoring or concept explanation, provided it does not generate required coursework.
- Data and statistical coding assistance, data analysis, or visualization in technical courses, if permitted.
6. PROHIBITED USES
- Submitting AI-generated work without disclosure or permission.
- Any usage that is not aligned with the permitted uses defined in Section 5 of this policy.
- Misrepresenting AI-generated content as personal or group work. Using AI tools to generate work that is submitted as the student’s own original analysis, reasoning, or problem-solving when the assignment requires those skills.
- Using AI tools in ways that violate the assignment’s stated requirements, such as producing explanations, analyses, or solutions that the student is required to complete independently.
- Using AI tools to produce written, quantitative, or analytical work that the student is expected to create through their own reasoning or interpretation, unless explicitly permitted by the instructor.
- Using AI to impersonate others or fabricate data, citations, or research findings.
- All use of AI for charting notes from patient encounters is strictly prohibited. Students may, however, use AI for case research while remaining HIPAA-compliant. Because generative AI tools may not be HIPAA-compliant, students shall not input any confidential or protected patient/client information into any AI software.
7. ENFORCEMENT AND CONSEQUENCES
AI detection tools should not be relied upon as the sole basis for determining academic misconduct, because they are not fully reliable or unbiased, especially when evaluating work by culturally diverse students and non-native English speakers. These tools often misclassify original writing as AI-generated due to differences in grammar, vocabulary, and sentence structure. Relying exclusively on them can lead to false accusations, disproportionately affecting culturally diverse students and English-as-a-Second-Language (ESL) students, and can undermine fairness in academic evaluations. A balanced approach that includes human judgment, context, and student input is essential to ensure accurate and equitable outcomes. All members of the academic community share a responsibility to critically examine and guard against potential bias when using AI or when determining whether AI tools have been used.
Violations of this policy will be treated as breaches of academic honesty; the school's academic honesty policy will therefore govern disciplinary action, due process, and appeals.
8. USE OF AI TOOLS FOR LANGUAGE SUPPORT
Recognizing the linguistic challenges faced by some students, Bastyr University permits the use of AI tools (e.g., the free version of Grammarly, or the grammar and spelling functions in Word) for language support purposes, such as grammar correction, vocabulary enhancement, and clarity improvement. These uses are considered acceptable provided they do not generate substantive academic content or circumvent learning objectives. Students must disclose any AI assistance used in their work to their instructor or supervisor. Faculty are encouraged to exercise discretion and consider the context of ESL students when evaluating submissions. This policy aims to ensure equitable access to language support while upholding academic integrity.
9. ETHICAL AND RESPONSIBLE USE OF AI
The institution encourages students to engage thoughtfully and ethically with AI tools. Responsible use of AI includes consideration of broader societal impacts, including environmental sustainability and equity. Students are expected to:
- Be mindful that AI systems consume significant computational resources and, when possible, avoid unnecessary or excessive use.
- Recognize that AI tools may reflect biases related to race, gender, culture, language, or socioeconomic status, and critically evaluate AI-generated content rather than accepting it as neutral or authoritative.
- Take responsibility for verifying the accuracy, appropriateness, and fairness of any AI-assisted work.
10. REVIEW AND UPDATES
This policy will be reviewed as needed but no less than annually by the Policies and Standards Committee and updated to reflect technological developments and pedagogical best practices.