A female-presenting person uses ChatGPT from her phone while studying. College students have been given tools like ChatGPT, but not direction on how to use them properly. Photo courtesy of Pexels.

California State University calls itself “the first and largest AI-powered public university system,” a sweeping promise to make tools and training available to 460,000 students and 63,000 faculty and staff. The press release makes grand promises, touting “AI Educational Innovations” and an AI Commons meant to transform teaching, learning, and research. The execution, however, feels less like a well-thought-out policy initiative and more like someone prompted ChatGPT to “make it AI-powered.”

At CSUSB, faculty and student-support leaders describe a rollout that caught them off guard, provided little required training, and left both instructors and students improvising ethical boundaries on their own.

Dr. Jessica Luck, chair of CSUSB’s English Department, said faculty learned of CSU’s “AI-powered” identity at the same time the public did. “We felt just as blindsided as the students,” she said. “We found out when we read it in the newspaper.”

The English department has tried to address the confusion internally by hosting informal lunch forums where professors share the changes they are making in their classrooms. Despite some optional online information pages and a training course, Luck said the department received no direct training or consistent direction. “We’re not about forcing policies down people’s throats,” she said, “but we were given no heads-up. Everyone is just scrambling to figure out how to handle it.”

The bigger issue for Luck and her department is the education journey the students will take. “Asking AI to write your paper,” she said, “is like asking a robot to lift the weights for you in kinesiology class. You skip the work that builds your brain.” She stressed that writing is not about producing grammatically perfect sentences; it’s about the intellectual effort of forming ideas. “We want students to struggle a little,” she said. “That’s how they learn to think.”

Amid heavy workloads, a few faculty members have still managed to take CSU’s training and bring what they learned back to their departments. Dr. Sunny Hyon, a linguist in the English Department and one of only two of its faculty members to have completed the Academic Applications of AI (AAAI) microcredential, described the program as “informative,” praising its modules on ethical use, prompt creation, and classroom application.

In one of her courses, Hyon asked students to open ChatGPT and request a list of potential careers for English majors. The results varied from student to student, prompting a discussion of how AI-generated lists differ and how the tool could help identify gaps in the university’s online information on career paths.

Her example demonstrates what meaningful AI literacy can look like when professors have the time and support to implement it. But that is precisely what most CSU instructors lack. The AAAI course is optional, must be completed on the professor’s free time, and is offered without follow-up. “It’s helpful,” Luck said of Hyon’s experience, “but it shouldn’t depend on whether a few faculty members happen to take a voluntary class.”

For Nathan Jones, director of the CSUSB Writing Center, the changes are more behavioral than structural. “We haven’t overhauled how we work,” he said. “We meet students where they are and collaborate on goals. But we’re seeing new patterns in what students bring in.”

Students now arrive with drafts that are often heavily shaped by AI. Jones and his staff approach those papers the same way they always have: asking about intent, discussing citations, and talking through ethical use. Jones pointed to a potentially more sinister side of AI usage when he said, “Watching Grammarly or an AI suggestion steer a student mid-sentence is like seeing another agent seize the wheel. If you accept uncritically, your thinking shifts without you noticing.”

The Writing Center has tried to keep up through self-study and staff reading. They recently discussed a Harvard Business Review article on “AI work slop,” meaning work produced carelessly by AI and handed off to others to clean up. The piece raised questions about trust and collaboration that the center is increasingly seeing in student writing.

Jones confirmed that Writing Center staff received access to the same optional AAAI training. None has completed it. “Our student employees are capped at 20 hours a week,” he said. “If the training takes ten hours, that’s half their week. When it isn’t required, it’s hard to justify.”

Jones related an anecdote in which he asked a group of upper-division and graduate students whether they had ever used AI in their work, and every hand went up. When he followed by asking who had ever cited AI in their writing, every hand went down. When he asked who had used the university-provided AI, no hands went up. “The biggest effect of providing a campus AI might be the signal that using AI is sanctioned,” he said. “Students may not use the campus tool, but they feel permitted to use AI elsewhere.”

The Writing Center has already expanded its resources to meet the moment. It now provides handouts on citing AI in APA and MLA formats and offers workshops for graduate students on documenting AI use in their theses and dissertations. Tutors also advise students accused of AI use, some of whom are wrongly accused, on how to demonstrate process and authorship. “We’re seeing more of that,” Jones said. “Students who didn’t use AI and don’t know how to prove they didn’t.”

On Oct. 28, CSUSB hosted what was described as an “IRA-supported CSUSB Student Success Workshop focused on helping students integrate AI into the learning process,” led by Professor Viktor Wang of the Department of Educational Leadership and Technology. The event, advertised to CSUSB students via email and held on Zoom, promised to help students understand how to “ethically integrate AI into their educational journeys.”

About 20 people attended, several of them student presenters. The session covered topics such as how generative AI works, responsible use, and examples of classroom applications. It introduced useful concepts, including the “tutor, tool, tutee” framework, which encourages students to learn from, with, and about technology, a solid educational model developed by Robert Taylor in 1980.

Still, much of the presentation blurred the line between enthusiasm and accuracy. Some statements, such as that “AI launched in November 2022,” that “70 percent of Hollywood movies are made with AI,” or that the CSU spent $16.9 million investing in ChatGPT, were misleading or incorrect. Others overstated AI’s cognitive abilities or understated its current reach, suggesting that GenAI creates new knowledge or that AI can’t access the internet. Wang’s student presenters often brought the discussion back to practical ground, emphasizing verification, transparency, and bias awareness, but the workshop was certainly not official university training; it felt more like a panel discussion among enthusiastic AI users.

The event illustrated a recurring CSU pattern: initiatives advertised as part of a major AI rollout that, in practice, amount to small, voluntary sessions attended by only a few students. The workshop’s goals of ethical literacy, critical use, and awareness of bias were admirable, but its limited attendance, less-than-factual generalizations, and lack of formal integration into coursework underscored how far the CSU still has to go in turning “AI-powered” into a usable tool for students and faculty.

CSU’s Academic Senate seems to share that lack of direction. Its resolution, The Possible Use of AI in Instruction, explicitly states that the university “has no intention of requiring that faculty use GenAI.” It recognizes that “some students will inevitably use GenAI” and encourages instructors to adapt assignments accordingly. But the resolution stops short of mandating training or offering clear standards. It is, in essence, an acknowledgment that the system is figuring things out as it goes.

Meanwhile, CSU’s AI Commons site includes several thoughtful documents: an AI Literacy Literature Summary, an Ethical Principles Framework, and Guidelines for Faculty Regarding AI in Instruction. They all stress ethical use, transparency, and critical engagement. But each document also describes itself as non-prescriptive. The framework “does not dictate practices,” the literacy page “invites reflection,” and the faculty guidelines “encourage local adaptation.” They are well-meaning and toothless.

The result is a patchwork. Some departments embrace AI experimentation; others ban it outright. Some students learn to prompt critically; others learn to hide their use of prompts. Professors are left to interpret vague assurances of “empowerment” without clear rules, while support centers field the fallout. Students risk using tools provided by their university that could get them kicked out.

The stakes are not theoretical. AI use in writing and research raises practical and ethical questions that require literacy and education, not thoughts and prayers.

Students who use AI haphazardly skip the cognitive exercise that writing is meant to teach: synthesizing, analyzing, and articulating. Students who never use it may graduate unprepared for workplaces that now expect AI fluency. Professors who ban it entirely may encourage academic integrity but miss opportunities to teach ethical usage. Professors who allow it without guardrails risk unintentional plagiarism.

Even well-intentioned integration can go wrong. An art student, who asked to remain anonymous, said they were required to use generative AI to create images in a class designed to teach drawing. The student objected on environmental and ethical grounds, but was told they had to complete the assignment using AI. CSU providing ChatGPT without structured guidance gives students permission to cheat and gives professors the power to force its use.

The broader irony is that CSU’s systemwide announcement seems to have legitimized AI use more than it has educated anyone about it. Jones calls it a “signal effect.” The existence of a CSU-branded chatbot implies safety and approval, even if few use it. That’s what happens when the press release arrives before the policy.

There are workable solutions. The CSU does not need to ban AI, nor does it need to glorify it. It needs to teach it. The AAAI microcredential could become a required baseline for all teaching faculty, with compensation for completion and discipline-specific modules. Every professor, from art to zoology, should know how AI works, where it fails, and how to integrate or restrict it responsibly.

Before orientation, every incoming CSU student should complete an AI literacy and ethics module. CSU already requires training on Title IX, alcohol safety, and information security. AI deserves the same priority. Students need to know how to cite AI, when not to use it, and how to document their writing process to protect themselves.

Faculty should be provided with a clear menu of policy statements ranging from full prohibition to guided integration, so they can choose one, adapt it, and include it in every syllabus. Consistency does not mean uniformity; it means transparency.

Student-support units like Writing Centers and Libraries should receive paid time and funding to train staff and update resources. Compensation for faculty and staff would be a far better use of $17 million than simply providing every student with a ChatGPT account. That is where policy becomes practice.

If CSU is to remain credible in its claim to be “AI-powered,” it must report progress annually: how many faculty have completed training, how many students have completed literacy modules, what learning outcomes are observed, and what gaps persist.

It’s not about being against AI; it’s about preparation. Faculty like Jessica Luck and Sunny Hyon are already doing the real work of adapting pedagogy. Staff like Nathan Jones are translating theory into student support. And educators like Viktor Wang are in the trenches with students. They should not be left to shoulder an institutional experiment alone.

Artificial intelligence is not going away. Students will continue to use it. Professors will continue to debate it. Universities will continue to market it. Corporations will continue to demand fluency. But the difference between being “AI-powered” and being “AI-prepared” is the difference between a slogan and an education.

For now, the CSU’s initiative feels more like branding than transformation. It promises empowerment but delivers ambiguity. The tools have been handed out. The safety manual is optional.
