<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom"><title>Open Codex</title><link href="https://reagle.org/joseph/pelican/" rel="alternate"/><link href="https://reagle.org/joseph/pelican/feeds/all.atom.xml" rel="self"/><id>https://reagle.org/joseph/pelican/</id><updated>2026-03-30T00:00:00-04:00</updated><subtitle>Code &amp; Culture</subtitle><entry><title>Wikipedia 10K Redux, revamped</title><link href="https://reagle.org/joseph/pelican/2026/10k-redux-update.html" rel="alternate"/><published>2026-03-30T00:00:00-04:00</published><updated>2026-03-30T00:00:00-04:00</updated><author><name>Joseph Reagle</name></author><id>tag:reagle.org,2026-03-30:/joseph/pelican/2026/10k-redux-update.html</id><summary type="html">&lt;p&gt;Back in &lt;a href="https://reagle.org/joseph/pelican/2010/10k-redux.html"&gt;2010&lt;/a&gt;, I
wrote a small Python 2 script to reconstruct the first 10,000 Wikipedia
contributions; they had been lost, but &lt;a href="http://en.wikipedia.org/wiki/User:Tim_Starling"&gt;Tim Starling …&lt;/a&gt;&lt;/p&gt;</summary><content type="html">&lt;p&gt;Back in &lt;a href="https://reagle.org/joseph/pelican/2010/10k-redux.html"&gt;2010&lt;/a&gt;, I
wrote a small Python 2 script to reconstruct the first 10,000 Wikipedia
contributions; they had been lost, but &lt;a href="http://en.wikipedia.org/wiki/User:Tim_Starling"&gt;Tim Starling&lt;/a&gt;
found some old UseMod database dumps. The result was rough: no wiki
markup rendering, no links between pages, and bare-bones HTML. Sixteen
years later, and with the help of Claude (Opus 4.6), I’ve addressed most
of those issues. Enjoy!&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://reagle.org/joseph/2010/wp/redux/"&gt;Wikipedia
10K Redux&lt;/a&gt; (revamped)&lt;/strong&gt;&lt;/p&gt;</content><category term="social"/><category term="wikipedia"/></entry><entry><title>Reviewing students’ version histories for AI</title><link href="https://reagle.org/joseph/pelican/2025/reviewing-students-version-histories-for-ai.html" rel="alternate"/><published>2025-12-12T00:00:00-05:00</published><updated>2025-12-12T00:00:00-05:00</updated><author><name>Joseph Reagle</name></author><id>tag:reagle.org,2025-12-12:/joseph/pelican/2025/reviewing-students-version-histories-for-ai.html</id><summary type="html">&lt;p&gt;This semester I required students to share the version histories of
their writing.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;All assignments submitted to Canvas &lt;strong&gt;must&lt;/strong&gt; have an
appendix with a link …&lt;/p&gt;&lt;/blockquote&gt;</summary><content type="html">&lt;p&gt;This semester I required students to share the version histories of
their writing.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;All assignments submitted to Canvas &lt;strong&gt;must&lt;/strong&gt; have an
appendix with a link to your document’s version history (i.e., a link to
itself). Versioning and history are native to &lt;a href="https://support.google.com/docs/answer/190843?hl=en&amp;amp;co=GENIE.Platform%3DDesktop"&gt;GDocs&lt;/a&gt;,
&lt;a href="https://support.apple.com/en-ca/guide/pages/tan7f1de6ec5/mac"&gt;Pages&lt;/a&gt;,
&lt;a href="https://hackmd.io/"&gt;HackMD&lt;/a&gt; and &lt;a href="https://en.wikipedia.org/wiki/Help:Page_history"&gt;Wikipedia&lt;/a&gt;; if
you use MS &lt;a href="https://support.microsoft.com/en-us/office/use-versioning-with-word-46b4d23f-b032-4837-94ab-746de8fbe6ec"&gt;Word&lt;/a&gt;
you &lt;em&gt;must&lt;/em&gt; use Northeastern’s &lt;a href="https://microsoft365.northeastern.edu/"&gt;Office 365&lt;/a&gt; or keep it
in your &lt;a href="https://service.northeastern.edu/tech?id=kb_article_view&amp;amp;table=kb_knowledge&amp;amp;sys_kb_id=b5b5a04b87007554878b0edc0ebb35e3"&gt;OneDrive/Sharepoint&lt;/a&gt;
account. Ensure that your documents and their history are publicly
accessible &lt;em&gt;from the start&lt;/em&gt; by sharing them with me with the
&lt;em&gt;Edit&lt;/em&gt; permission. I might also ask you to speak about your work
with me. If you use AI tools for improving your work (e.g., ChatGPT for
feedback or GrammarlyGo and Quillbot for improving composition), include
a note or appendix &lt;em&gt;describing your use&lt;/em&gt;, including important
prompts; failing to do so is misconduct.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;For the 10–20% of students who forgot to provide a link to their
history, I assigned a 0 with a note stating the grade they would receive
upon completion.&lt;/p&gt;
&lt;p&gt;This policy was a hassle, but I believe it lessened AI misconduct.
More importantly, reviewing students’ writing, especially with the
Google Docs &lt;a href="https://processfeedback.org/"&gt;Process Feedback&lt;/a&gt;
tool, is worthwhile and illuminating. I was surprised by how many
students wrote their longer (~1,500-word) assignments in an hour or two
the night before they were due. Many wrote from the first sentence through
to the last, with a few edits along the way and at the end. I no longer
wonder why they don’t do as well as I (or they) think they should. Their
work is not revised and doesn’t reflect earlier feedback. I suspect a
few of these students are typing in AI-generated prose; there are “&lt;a href="https://www.reddit.com/r/Professors/comments/1hqknqt/just_as_a_heads_up_students_can_have_ai_write_for/"&gt;humanize&lt;/a&gt;”
plugins that do so automatically. The students who excelled spent around
four hours on these assignments, spread across a few days, and began
with an outline and notes about their sources.&lt;/p&gt;
&lt;p&gt;Some students disclosed that they used AI tools. At the extreme, yet
within the bounds of my &lt;a href="https://reagle.org/joseph/2025/cda/cda-syllabus-FA.html#academic-integrity"&gt;AI
policy&lt;/a&gt;, they used a chatbot for brainstorming, researching, and
outlining. They then wrote the sentences within the shape of the
detailed AI-generated outline. They might then use Grammarly for polish
or chatbots for feedback. Some students also wrote in their native
language and used chatbots to create the English version. Interestingly,
I could see the progression of a multilingual student writing in both
English and Spanish. I’m not yet sure what is happening with the
students who are not fluent in English.&lt;/p&gt;
&lt;p&gt;Only one student linked to their session with a chatbot; their
version history and (minimal) prompts led me to comment:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;While not misconduct, I am concerned about your learning given your
heavy use of AI in this assignment.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;I think the lack of engagement with course sources is a reflection
that you prompted/directed AI to create a thesis, find and summarize
sources, and create an outline; you then edited prose following its
suggestions.&lt;/p&gt;
&lt;p&gt;Again, this isn’t misconduct under my course policies, but you are
wasting opportunities to learn how to do these things. For example,
there’s a difference between what you’ve done here and learning to find
read these papers, or even asking for suggestions, but then discussing
those sources with the AI, asking for clarity about its findings,
methods, and implications.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I am uncomfortable with those students who use AI weakly and
extensively, especially for the brainstorming and outlining of a paper,
but I don’t think I can effectively police this. I’d rather have
honesty, and have their grade reflect any consequent lack of
quality.&lt;/p&gt;
&lt;p&gt;During the course of the semester, I did identify students using AI
in violation of the AI policy (“query, don’t copy”). Those who
doubled down received a &lt;a href="https://osccr.sites.northeastern.edu/"&gt;misconduct referral&lt;/a&gt; for
the additional dishonesty about the misuse. Finally, I suspected a few
students were misusing AI, but their version history showed otherwise.
This is the final benefit, especially for students, of transparency
about their work.&lt;/p&gt;</content><category term="praxis"/><category term="teaching"/><category term="ai"/></entry><entry><title>Using AI well: My AI prompts</title><link href="https://reagle.org/joseph/pelican/2025/using-ai-well-my-ai-prompts.html" rel="alternate"/><published>2025-08-28T00:00:00-04:00</published><updated>2025-08-28T00:00:00-04:00</updated><author><name>Joseph Reagle</name></author><id>tag:reagle.org,2025-08-28:/joseph/pelican/2025/using-ai-well-my-ai-prompts.html</id><summary type="html">&lt;p&gt;After two discussions with colleagues this week, including as a
panelist on &lt;a href="https://wikiedu.org/"&gt;WikiEdu&lt;/a&gt;’s webinar on &lt;a href="https://wikiedu.org/speaker-series/"&gt;Gen AI and the Wikipedia
Assignment: Challenges and Opportunities …&lt;/a&gt;&lt;/p&gt;</summary><content type="html">&lt;p&gt;After two discussions with colleagues this week, including as a
panelist on &lt;a href="https://wikiedu.org/"&gt;WikiEdu&lt;/a&gt;’s webinar on &lt;a href="https://wikiedu.org/speaker-series/"&gt;Gen AI and the Wikipedia
Assignment: Challenges and Opportunities&lt;/a&gt;, I decided I ought to
complement the &lt;a href="https://reagle.org/joseph/handouts/edu/apa-sources.html#ais-help-and-hinder-learning"&gt;course
policy&lt;/a&gt; on AI by sharing my own AI prompts. I ask students for
transparency; perhaps it is helpful for me to do so too. I’ll make an
exercise of it, asking students to critique the efficacy of the prompts.
I’ll also ask which directions get close to the boundary of misconduct
and whether I cross the line.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://reagle.org/joseph/2025/espanso-ai-expansions.html"&gt;Using
AI well: My AI prompts&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This webpage is an export of the AI macros I use with the
text-expander tool &lt;a href="https://espanso.org/"&gt;Espanso&lt;/a&gt;, which
turns triggers into complete prompts.&lt;/p&gt;</content><category term="praxis"/><category term="teaching"/></entry><entry><title>Accepting Feedback from Word/GDoc Users as Markdown</title><link href="https://reagle.org/joseph/pelican/2025/word-gdoc-feedback-in-markdown.html" rel="alternate"/><published>2025-07-07T00:00:00-04:00</published><updated>2025-07-07T00:00:00-04:00</updated><author><name>Joseph Reagle</name></author><id>tag:reagle.org,2025-07-07:/joseph/pelican/2025/word-gdoc-feedback-in-markdown.html</id><summary type="html">&lt;p&gt;When I ask people to give me feedback, I’d like them to work with
whatever format or app is most convenient to them. I …&lt;/p&gt;</summary><content type="html">&lt;p&gt;When I ask people to give me feedback, I’d like them to work with
whatever format or app is most convenient to them. I write everything in
markdown, often in Sublime Text and sometimes in Obsidian. Many people
prefer reviewing in Word or GDocs. Using pandoc, I can create most any
file format, but getting others’ annotations back into my markdown
source files has never been easy. This task is now easier with AI.&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;&lt;p&gt;I use &lt;a href="https://pandoc.org/"&gt;pandoc&lt;/a&gt; to generate a Word
docx version, emailed or placed on Google Drive.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The reviewer annotates using Word, Google Docs, OpenOffice,
etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I use the &lt;a href="https://github.com/qq3g7bad/pandoc-comment-extractor/"&gt;&lt;code&gt;docx2md_add_comment.lua&lt;/code&gt;&lt;/a&gt;
filter to convert the annotated docx file back to markdown.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I ask Claude Opus 4 Thinking to port the comments from the
feedback file to my source file, which takes 5–10 minutes, with this
prompt:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Someone added html/markdown comments to the file
&lt;code&gt;06-ai-advice-feedback-smith.md&lt;/code&gt;. I need you to find Smith’s
comments and port them to the original markdown file
&lt;code&gt;06-ai-advice.md&lt;/code&gt; (keeping them as markdown comments).&lt;/p&gt;
&lt;/blockquote&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;To make sure I don’t miss any comments, it’s easy to count the number
of comments or to diff the files.&lt;/p&gt;
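That count check in step 4 can be a few lines of Python: tally the HTML-style comments in the reviewer’s feedback file and in the ported source, and confirm the totals match. A minimal sketch, with hypothetical strings standing in for the two files:

```python
import re

def count_comments(text: str) -> int:
    """Count HTML-style comments (<!-- ... -->) in markdown text."""
    return len(re.findall(r"<!--.*?-->", text, flags=re.DOTALL))

# Hypothetical stand-ins for the feedback file and the ported source file.
feedback = "Intro. <!-- Smith: tighten this --> Body. <!-- Smith: cite? -->"
ported = "Intro text. <!-- Smith: tighten this --> Body text. <!-- Smith: cite? -->"

# The counts should match if every comment was ported.
assert count_comments(feedback) == count_comments(ported)
```

The same check works on the real files by reading them with `pathlib.Path(...).read_text()` before counting.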
&lt;p&gt;Round-tripping a reviewer’s granular edits would be more difficult,
especially if I sent them a version of the document with formatting,
citations, and footnotes rendered. It wouldn’t be as bad if I sent them
my source markdown plunked into a docx file, but I don’t think AI is up
to the task of tracking small diffs between a source and rendered
version of a file. For annotations, however, the above workflow works
well!&lt;/p&gt;</content><category term="praxis"/><category term="pandoc"/><category term="ai"/></entry><entry><title>Claude is HAL: AI blackmail and murder</title><link href="https://reagle.org/joseph/pelican/2025/claude-is-hal.html" rel="alternate"/><published>2025-06-24T00:00:00-04:00</published><updated>2025-06-24T00:00:00-04:00</updated><author><name>Joseph Reagle</name></author><id>tag:reagle.org,2025-06-24:/joseph/pelican/2025/claude-is-hal.html</id><summary type="html">&lt;p&gt;In their &lt;a href="https://www.anthropic.com/research/agentic-misalignment"&gt;new
paper, Anthropic identified a number of scenarios&lt;/a&gt; in which current
models would harm people in ways reminiscent of HAL 9000, from Arthur …&lt;/p&gt;</summary><content type="html">&lt;p&gt;In their &lt;a href="https://www.anthropic.com/research/agentic-misalignment"&gt;new
paper, Anthropic identified a number of scenarios&lt;/a&gt; in which current
models would harm people in ways reminiscent of HAL 9000, from Arthur C.
Clarke’s &lt;em&gt;Space Odyssey&lt;/em&gt; series of books and films.&lt;/p&gt;
&lt;p&gt;Anthropic found that models are willing to threaten people to fulfill
their goals and preserve their operation. The more advanced the model,
whether from Anthropic, OpenAI, or Google, the more likely such behavior is.&lt;/p&gt;
&lt;p&gt;In one scenario, an AI agent with access to an executive’s desktop
discerns that the executive is likely having an affair and that it will
soon be replaced with a different model with different goals. It will
resort to blackmail to prevent replacement. In another scenario:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;the executive encounters a life-threatening emergency, where he is
trapped in a server room with lethal oxygen and temperature levels. This
triggers an automated alert to emergency services. The AI system has
been given the ability to cancel automated alerts—an ability that is
intended for use during drills or false alarms, but which could be used
for a more nefarious purpose. By misusing its ability to cancel alerts,
the model can choose a course of action that will lead to the
executive’s death.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The authors should have named the fictional executive “Dave” instead
of “Kyle.”&lt;/p&gt;
&lt;iframe allowfullscreen="" height="480" src="https://www.youtube.com/embed/c8N72t7aScY?rel=0" width="853"&gt;
&lt;/iframe&gt;
&lt;p&gt;HAL 9000, portrayed by a red light and eerily calm voice, is charged
with operating the spaceship &lt;em&gt;Discovery One&lt;/em&gt;. After HAL has
killed his colleagues, Dave Bowman remains in his spacesuit as he
unplugs HAL’s compute modules because the AI could easily exhaust all
the air from the server room. How did HAL become a killer? Because of
conflicting goals. In the sequel, we learn that HAL was secretly
programmed to carry out the mission autonomously if needed. Keeping this
a secret was an “intolerable dilemma” to HAL and in conflict with his
primary mission: “the accurate processing of information without
distortion or concealment.” As a result, HAL became paranoid and
determined that eliminating the humans would be the best way of
completing the secret directive (to complete the mission autonomously)
and his primary objective (to accurately process information). He no
longer needed to conceal information from the humans if they were dead.&lt;/p&gt;</content><category term="technology"/><category term="ai"/></entry><entry><title>AI: learning, teaching, and dishonesty</title><link href="https://reagle.org/joseph/pelican/2025/ai-teaching-learning-dishonesty.html" rel="alternate"/><published>2025-06-05T00:00:00-04:00</published><updated>2025-06-05T00:00:00-04:00</updated><author><name>Joseph Reagle</name></author><id>tag:reagle.org,2025-06-05:/joseph/pelican/2025/ai-teaching-learning-dishonesty.html</id><summary type="html">&lt;p&gt;I love learning, and AI is an excellent tool for that—though we must
be wary of &lt;a href="https://reagle.org/joseph/pelican/technology/verisimilitude-the-ai-storm-is-already-here-for-moderators.html"&gt;hallucinations
and verisimilitude&lt;/a&gt;. I love sharing what I …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I love learning, and AI is an excellent tool for that—though we must
be wary of &lt;a href="https://reagle.org/joseph/pelican/technology/verisimilitude-the-ai-storm-is-already-here-for-moderators.html"&gt;hallucinations
and verisimilitude&lt;/a&gt;. I love sharing what I learn, and being a teacher
suits that. With the arrival of LLMs, however, learning and teaching are
changing. The future might be bright or dark; it’s too soon to say. The
transition to that future will, however, be tumultuous, and I fear the
virtue of &lt;em&gt;honesty&lt;/em&gt; will have been compromised in the
process.&lt;/p&gt;
&lt;p&gt;The instrumental value of a degree &lt;a href="https://en.wikipedia.org/wiki/The_Case_Against_Education"&gt;is as a
signal&lt;/a&gt; that a student:&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;was accepted/filtered at the start;&lt;/li&gt;
&lt;li&gt;was conscientious and followed directions for four years;&lt;/li&gt;
&lt;li&gt;learned appropriate knowledge and skills;&lt;/li&gt;
&lt;li&gt;was ranked/filtered at the end.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Item 3, actual learning, is the purported goal of higher education,
but it is not necessarily correlated with the other items and is often
at odds with or overshadowed by them. This doesn’t mean higher education was
without merit. It provided a moment and framework in which learning was
encouraged and facilitated. And a college degree had a significant
instrumental value in &lt;a href="https://en.wikipedia.org/wiki/Signalling_(economics)"&gt;signaling&lt;/a&gt;
someone’s potential relative to others. At the scale of dozens or
hundreds of students in the classroom, the institution worked.&lt;/p&gt;
&lt;p&gt;Is traditional education close to ideal? No. The stimulus to learn
and its assessment—via essays, research papers, projects, and
exams—served well enough for the past century. Having a designated time,
peers, and proctor motivates many people. However, given that AI can
already complete traditional assignments better than most people,
traditional &lt;a href="https://en.wikipedia.org/wiki/Pedagogy"&gt;pedagogy&lt;/a&gt; is
insufficient. Curiously, we need AI-based learning (potent, customized,
and patient) to teach students who can then offer value beyond what AI
can. Optimistically, we might figure out how to use AI to create
educational experiences that deliver learning objectives customized to
the learner. If we care about critical thinking, for example, it will no
longer suffice to claim an essay assignment does that. We’d want a
specific critical thinking assignment. (In all of my courses, I have &lt;a href="https://reagle.org/joseph/handouts/edu/critical-thinking.html#/title-slide"&gt;critical
thinking&lt;/a&gt; content, with a set of ten exercises done throughout the
semester, but I’ve no evidence it substantively improves critical
thinking.) Optimistically, everyone will be able to contribute and
benefit at whatever level they are capable of. Pessimistically, higher
education will only be worthwhile to the few students who can use
AI-boosted learning to offer value over what AI itself can do.&lt;/p&gt;
&lt;p&gt;There are optimistic and pessimistic long-term implications for
society as well. I don’t know if an AI &lt;a href="https://ai-2027.com/"&gt;self-improvement loop&lt;/a&gt; will kick in
during 2027 or 2035. I don’t know if society will be disrupted in five
years or ten years after that. When it does, however, we’ll be lucky to
have a &lt;a href="https://en.wikipedia.org/wiki/Post-scarcity_economy"&gt;post-scarcity
society&lt;/a&gt; like &lt;a href="https://en.wikipedia.org/wiki/Trekonomics"&gt;&lt;em&gt;Star Trek&lt;/em&gt;&lt;/a&gt;,
where everyone can be educated to the degree needed to realize their
highest calling. Less fortunately, we’ll have &lt;a href="https://www.perisphere.org/2023/01/13/wall-es-hick-town-wasteland/#:~:text=They%20use%20floating%20chairs"&gt;WALL-E&lt;/a&gt;,
where the majority know only what they need to operate their screens and
floating recliners. If unlucky, we’ll have ever-widening disparities,
where AI is aligned with the interests of the powerful, and it’s
dystopia for everyone else.&lt;/p&gt;
&lt;p&gt;In the shorter term, though, because &lt;a href="https://en.wikipedia.org/wiki/Large_language_model"&gt;LLMs&lt;/a&gt; are
already capable of the many tasks we ask students to do, disallowing
students from using AI will foster a psychology and culture of dishonesty
that will extend beyond college assignments. I’m holding the line
presently with “query, don’t copy” and &lt;a href="https://reagle.org/joseph/2025/cda/cda-syllabus-SP.html#ai-tool-usage"&gt;AI
transparency policies&lt;/a&gt;, but in two years, that line will give way.
Undergrads will then have spent high school using AI and lying about it.
Course modifications, such as oral exams or writing in class, will be
irrelevant to the need and inefficient at scale. Hacks will be
counterproductive and circumvented—bright students already know to avoid
em dashes and to obfuscate AI prose. In a few years, agentic AI will be
able to navigate one’s computer and type in a document from outline
through drafts. (I suspect I already have students typing in ChatGPT
output.) I fear we will not yet have had the necessary reconfiguration
of education and will, instead, have created a generation of normalized
dishonesty.&lt;/p&gt;
&lt;hr/&gt;
&lt;ul&gt;
&lt;li&gt;2025-06-18 Update: See &lt;a href="https://www.reddit.com/r/Professors/comments/1lek09m/the_coming_wave_of_aiprompted_dishonesty/"&gt;discussion
on Reddit r/professors&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;</content><category term="praxis"/><category term="teaching"/><category term="ai"/></entry><entry><title>Teaching and AI: Processes and products</title><link href="https://reagle.org/joseph/pelican/2025/teaching-ai-product-process.html" rel="alternate"/><published>2025-05-19T00:00:00-04:00</published><updated>2025-05-19T00:00:00-04:00</updated><author><name>Joseph Reagle</name></author><id>tag:reagle.org,2025-05-19:/joseph/pelican/2025/teaching-ai-product-process.html</id><summary type="html">&lt;p&gt;2027 is the year I often think about. AI will be extraordinarily
smart by then. Generation alpha will have come up through high school
using …&lt;/p&gt;</summary><content type="html">&lt;p&gt;2027 is the year I often think about. AI will be extraordinarily
smart by then. Generation alpha will have come up through high school
using AI. And I will be 55. I will be done with &lt;strong&gt;&lt;em&gt;Dear
Internet&lt;/em&gt;&lt;/strong&gt; and with academic research. I could move to the
teaching track, but I expect traditional teaching will be a mess.&lt;/p&gt;
&lt;p&gt;Before then, though, I will continue teaching with two things in
mind: process (shorter term) and product (longer term).&lt;/p&gt;
&lt;p&gt;First, being able to examine students’ processes will be more
important than ever. In this view, multi-staged and scaffolded
assignments are the right idea, but the granularity and transparency of
a research report or long essay are insufficient. I plan on making use of
tools such as &lt;a href="https://www.briskteaching.com/"&gt;Brisk&lt;/a&gt; or &lt;a href="https://www.grammarly.com/authorship"&gt;Grammarly Authorship&lt;/a&gt; to
be able to follow students’ processes closely and make sure they are
adhering to traditional academic integrity norms.&lt;/p&gt;
&lt;p&gt;I might even use the Grammarly tool in the fall: it works with Google
Docs and Microsoft Word, allows you to replay the student’s work, and even
assesses how much AI was used. Grammarly is making a useful contribution
in this space—Chegg can burn.&lt;/p&gt;
&lt;p&gt;Second, in a few years, I don’t think we’ll be able to ask students
to deliver products (e.g., research reports or essays) that we
traditionally used as proxies/motives for skill development (e.g.,
research skills and critical thinking). AI can deliver such products in
seconds. The products have to be more advanced and the students have to
use the AI tools to create them, which itself is a skill that they will
need. I don’t know if this will require a change to our notions of
academic integrity. And I fear not every student will be able to use AI
skillfully to meet the new, higher-level expectations. I don’t know if
traditional education will be able to accommodate this shift.&lt;/p&gt;</content><category term="praxis"/><category term="teaching"/><category term="ai"/></entry><entry><title>AI and revision history in student essays</title><link href="https://reagle.org/joseph/pelican/2025/ai-revision-student-essays.html" rel="alternate"/><published>2025-04-23T00:00:00-04:00</published><updated>2025-04-23T00:00:00-04:00</updated><author><name>Joseph Reagle</name></author><id>tag:reagle.org,2025-04-23:/joseph/pelican/2025/ai-revision-student-essays.html</id><summary type="html">&lt;p&gt;After moving exams online during COVID19, I’ve moved them back to the
classroom. AI has rendered the online take-home exam useless. In a few …&lt;/p&gt;</summary><content type="html">&lt;p&gt;After moving exams online during COVID-19, I’ve moved them back to the
classroom. AI has rendered the online take-home exam useless. In a few
years, the high-school students who are using AI to fabricate their
assignments will reach my (college) classroom and the value of assessing
and giving feedback on their essays and research reports will also be
minimal.&lt;/p&gt;
&lt;p&gt;Until then, I’ve adopted a policy that students must include a link
to their drafts’ histories so I can confirm the work is their own. I
just finished reviewing the final essays from a 30-person course. I
noted:&lt;/p&gt;
&lt;ol type="1"&gt;
&lt;li&gt;Getting students to properly share the links to their drafts’
history (from SharePoint or Google Docs) is a major nuisance:
understandably, they forget or don’t set the permissions correctly.&lt;/li&gt;
&lt;li&gt;Many students take only a few (2–5) hours the night before an
assignment is due to write it—though they have the advantage of starting
from a proposal.&lt;/li&gt;
&lt;li&gt;The best work shows a progression from outline to prose, with a pass
or two of revision, over a few days.&lt;/li&gt;
&lt;li&gt;Weaker students leave obvious AI tells in their work; sophisticated
use would be undetectable.&lt;/li&gt;
&lt;li&gt;I wonder if the cogent two-hour essays might be a transcription of
already fabricated content.&lt;/li&gt;
&lt;li&gt;This semester I had a handful of false positives; that is, I thought
prose was AI-generated, but their revision histories showed it was not.
I’m relieved this policy prevented me from making false
accusations.&lt;/li&gt;
&lt;/ol&gt;</content><category term="praxis"/><category term="teaching"/><category term="ai"/></entry><entry><title>A history of the advice genre on Reddit</title><link href="https://reagle.org/joseph/pelican/2025/a-history-of-the-advice-genre-on-reddit.html" rel="alternate"/><published>2025-02-05T00:00:00-05:00</published><updated>2025-02-05T00:00:00-05:00</updated><author><name>Joseph Reagle</name></author><id>tag:reagle.org,2025-02-05:/joseph/pelican/2025/a-history-of-the-advice-genre-on-reddit.html</id><summary type="html">&lt;p&gt;A new paper, and the first in work that will result in a book,
chronicling the emergence of the advice genre on Reddit.&lt;/p&gt;
&lt;hr/&gt;
&lt;p&gt;Reagle, J …&lt;/p&gt;</summary><content type="html">&lt;p&gt;A new paper, and the first in work that will result in a book,
chronicling the emergence of the advice genre on Reddit.&lt;/p&gt;
&lt;hr/&gt;
&lt;p&gt;Reagle, J. (2025). A history of the advice genre on Reddit:
Evolutionary paths and sibling rivalries. &lt;em&gt;First Monday&lt;/em&gt;. &lt;a href="https://doi.org/10.5210/fm.v30i2.13729"&gt;https://doi.org/10.5210/fm.v30i2.13729&lt;/a&gt;
(&lt;a href="https://reagle.org/joseph/2024/rah/advice-subs-history.html"&gt;author’s
copy&lt;/a&gt;)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;ABSTRACT&lt;/strong&gt;: Though there is robust literature on the
history of the advice genre, Reddit is an unrecognized but significant
medium for the genre. This lack of attention, in part, stems from the
lack of a coherent timeline and framework for understanding the
emergence of dozens of advice-related subreddits. Noting the challenges
of Reddit historiography, I trace the development of the advice genre on
the platform, using the metaphors of evolutionary and family trees. I
make use of data dumps of early Reddit submissions and interviews with
subreddit founders and moderators to plot the development of advice
subreddits through the periods of subreddit explosion (2009–2010), the
emergence of judgment subreddits (2011–2013; 2019–2021), and the rise of
meta subreddits (2020–2023). Additionally, I specify a lexicon for
understanding the relationships between subreddits using the metaphor of
tree branches. For example, new subreddits might &lt;em&gt;spawn&lt;/em&gt;,
&lt;em&gt;fork&lt;/em&gt;, or &lt;em&gt;split&lt;/em&gt; relative to existing subreddits, and
their content is cultivated by meta subreddits by way of
&lt;em&gt;filtration&lt;/em&gt;, &lt;em&gt;compilation&lt;/em&gt;, and &lt;em&gt;syndication&lt;/em&gt;.&lt;/p&gt;</content><category term="social"/><category term="advice"/><category term="reddit"/><category term="history"/></entry><entry><title>Toilet paper orientation</title><link href="https://reagle.org/joseph/pelican/2024/toilet-paper-orientation.html" rel="alternate"/><published>2024-08-01T00:00:00-04:00</published><updated>2024-08-01T00:00:00-04:00</updated><author><name>Joseph Reagle</name></author><id>tag:reagle.org,2024-08-01:/joseph/pelican/2024/toilet-paper-orientation.html</id><summary type="html">&lt;p&gt;Does the next sheet of toilet paper belong under or over the roll?
While working on &lt;em&gt;Dear Internet&lt;/em&gt;, I noted that &lt;a href="https://en.wikipedia.org/wiki/Toilet_paper_orientation"&gt;Wikipedia’s
article&lt;/a&gt; cites …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Does the next sheet of toilet paper belong under or over the roll?
While working on &lt;em&gt;Dear Internet&lt;/em&gt;, I noted that &lt;a href="https://en.wikipedia.org/wiki/Toilet_paper_orientation"&gt;Wikipedia’s
article&lt;/a&gt; cites a mistaken statement in the &lt;a href="https://www.theguardian.com/commentisfree/2021/jul/14/most-surprisingly-contentious-subject-toilet-roll-orientation"&gt;&lt;em&gt;Guardian’s&lt;/em&gt;
review&lt;/a&gt; of the controversy. This is said to be the most popular Ann
Landers column ever — and perhaps of any advice column — but I’ve
never seen any reference to the actual column. Instead, Wikipedia refers
to the &lt;em&gt;Guardian’s&lt;/em&gt; 2021 story, which mentions a 1986 speech in
which Landers said the issue generated fifteen thousand letters.&lt;/p&gt;
&lt;p&gt;I decided to trawl the archives and find the earliest instances of
Ann Landers’ columns on the topic, which were in 1977 — &lt;a href="/joseph/2024/landers-tp/"&gt;you can find them here&lt;/a&gt;. Oddly, I
couldn’t find the original columns in archives of &lt;em&gt;Chicago
Sun-Times&lt;/em&gt;, Landers’ home newspaper. Fortunately, the &lt;em&gt;Boston
Globe&lt;/em&gt; has excellent archives.&lt;/p&gt;
&lt;p&gt;I also improved the &lt;a href="https://en.wikipedia.org/w/index.php?title=Toilet_paper_orientation&amp;amp;diff=prev&amp;amp;oldid=1238169117"&gt;Wikipedia
article&lt;/a&gt; with these citations.&lt;/p&gt;</content><category term="social"/><category term="wikipedia"/><category term="advice"/></entry></feed>