Meet your intelligent research assistant

Jenni is an AI workspace where researchers read, write, and cite. Every claim can be traced back to its source.


Loved by more than 6 million scholars



Beyond Detection: A Framework for Ethical AI Integration in Academic Research

The proliferation of generative AI in academic contexts has revealed a fundamental truth that institutions have been reluctant to acknowledge:

The detection paradigm has failed.

AI detection tools achieve accuracy rates often below 80% in independent testing (Wakjira et al., 2025). Their false positive rates can be as high as 50% across widely-used platforms (Weber-Wulff et al., 2023). There is also documented systematic bias, with over 61% of non-native English writing flagged as AI-generated (Liang et al., 2023). The current approach of "detect and punish" thus creates more harm than it prevents. Studies indicate that 13.5% to 22.5% of academic papers now show evidence of AI assistance (Kobak et al., 2025).

The path forward requires abandoning unreliable surveillance in favor of transparency architectures: tools and policies designed from inception to make AI contributions visible, auditable, and appropriately constrained.

Part I: The epistemological limits of AI detection

Contemporary AI detection rests on a brittle assumption: that the statistical fingerprints of machine-generated prose remain stable, distinguishable from human writing, and resistant to even modest paraphrase. Each of these premises dissolves under sustained scrutiny. Modern generative systems are trained on the same authoritative corpora that high-quality human writing draws from, and their outputs converge on precisely the registers detectors are calibrated to flag as natural (Sadasivan et al., 2024). The result is a moving target that detectors cannot follow without retraining on every new model generation — a posture that is neither operationally nor epistemologically sustainable.

Empirical work over the past eighteen months has documented this drift in granular detail. When evaluated on out-of-distribution writing — graduate theses, technical manuscripts, translated passages — detector accuracy collapses well below the threshold required for any high-stakes adjudication (Liang et al., 2023; Sadasivan et al., 2024). A meta-analysis of fourteen commercial detectors found a median accuracy of 39.5% on lightly paraphrased text — a figure that is not merely poor but actively misleading. Institutions deploying these systems are operating below the level of a coin flip while presenting their judgments as forensic evidence.

1.1 The base-rate fallacy in detection deployment

Even a hypothetical detector with 95% sensitivity and 95% specificity — performance no current system approaches — produces an unacceptable error rate when applied across populations where undisclosed AI use is rare. If 5% of submissions involve a genuine policy violation, applying such a detector to a class of 400 students correctly flags 19 of the 20 actual cases while wrongly accusing roughly 19 honest students. Real detectors operating below 80% accuracy push the false accusation rate beyond what any educational institution can ethically sustain (Fleckenstein et al., 2024).
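The base-rate arithmetic above can be checked directly. A minimal sketch (the function name and figures are illustrative, taken from the hypothetical detector in the paragraph, not from any real tool):

```python
# Check the base-rate arithmetic: a detector with 95% sensitivity and
# 95% specificity screening 400 submissions, of which 5% involve a
# genuine policy violation.

def detector_outcomes(n, base_rate, sensitivity, specificity):
    """Return (true positives, false positives, precision) for one screening run."""
    violators = n * base_rate               # submissions that truly violate policy
    honest = n - violators                  # submissions that do not
    true_pos = violators * sensitivity      # violations correctly flagged
    false_pos = honest * (1 - specificity)  # honest work wrongly flagged
    precision = true_pos / (true_pos + false_pos)
    return true_pos, false_pos, precision

tp, fp, prec = detector_outcomes(400, 0.05, 0.95, 0.95)
print(round(tp), round(fp), round(prec, 2))  # 19 true flags, 19 false accusations, precision 0.5
```

In other words, even under these generous assumptions, half of all accusations land on honest students; with real detectors below 80% accuracy, precision falls further still.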

These statistical realities are compounded by a recursive contamination problem. As model output increasingly populates the open web, the next generation of detectors trains on a corpus in which human and machine are no longer cleanly distinct categories — they are interleaved, cross-cited, and mutually shaping (Shumailov et al., 2024). Detection at that point ceases to identify a meaningful boundary; it merely reproduces the priors encoded during its last training cycle.

1.2 Disparate impact and the linguistic monoculture

The harms of unreliable detection are not distributed evenly. Independent audits repeatedly show that detectors penalize writers whose first language is not English at rates three to four times higher than native speakers (Liang et al., 2023), and that lower-perplexity prose — the very prose that structured academic training tends to produce — registers as "machine-like" to most commercial models. A system that punishes linguistic care while rewarding idiosyncrasy is not measuring authorship; it is measuring stylistic distance from a narrow Anglophone norm. The pedagogical consequences are severe: students learn to write worse on purpose to evade the detector, inverting every signal a writing program is meant to cultivate.


Trusted by universities and companies around the world


How it works

From a blank page to a cited paper in three steps

01

Import your sources

Drag and drop PDFs, import from Zotero or Mendeley, or let Jenni search over 200 million papers. Your library is ready in seconds.

02

Write with AI

Smart AI autocomplete suggests sentences grounded in real papers. Each suggestion is cited, so you can trace it back to its original source.

03

Cite, review, export

Insert inline citations in over 2,600 styles with one click. Verify every claim against the original PDF. Export to .docx, LaTeX, or HTML.

Why choose JENNI

See Peer Review in action

Watch how Jenni reads a real manuscript, scores it against the rubric, and leaves comments where each section needs work.


Just another AI chatbot?

There are hundreds of AI tools. Here is what sets Jenni apart from ChatGPT.

Reads the full manuscript

Peer Review reads your full draft cover to cover, capturing every claim, every method note, and every transition, so feedback reflects the whole document.

Same criteria reviewers use

Peer Review fills out the same review form top journals use, with scores on soundness, contribution, and presentation plus written feedback.

Comments tied to passages

Jenni anchors every comment to a specific sentence, with a reason and a suggested fix. You know what to change and where, not just that something is off.

New: Reviews

Find weaknesses before reviewers do

Reviews analyzes every claim in your paper, checks it against its sources, and flags issues across six categories. Submit with confidence.
Unverified or speculative claims are the most common reason for rejection in peer review. Jenni finds them in seconds.

Peer review: 8/10

Manuscript scored against a peer-review rubric, with reviewer comments on each section and results grouped into strengths and weaknesses.

Soundness: 3/4
Presentation: 4/4
Contribution: 3/4

Claim confidence: 10 issues

The claim confidence analysis flags redundant, weak, or missing citations, along with contradictions among the cited arguments, across these categories:

Misrepresented
Contradicted: 3
Unsupported: 4
Weakly supported: 2
Overstated
Unverifiable
Outdated: 2
Self-citation heavy
Predatory source
Citation mismatch: 1
Proofread: 18 edits

Whilst generally sound, the text contains some areas for improvement to comply with academic best practices.

Word choice: "The majority of participants reported improved outcomes."
Formality: "Yang (2024) found a negative correlation which was interesting."
Grammar: "early intervention be effective" → "early intervention appears to be effective"
Transitions: "Also," → "In addition," (Jones, 2022)
Overgeneralized: "The results prove" → "The results suggest that X has an effect on Y."
Tone of voice: 22 notes

Suggestions across vocabulary, syntax, punctuation, tone, and flow to keep a consistent academic voice.

Vocabulary: 6
Syntax: 5
Punctuation: 4
Tone: 3
Flow: 4

Citation analysis

Academic proofreading

Inline feedback

"The Claim Confidence feature is super useful. It flags any unsupported, overstated, or weakly supported claims."

Sabine Hossenfelder

Physicist & Author of Lost in Math


"I regularly try AI tools for research and have found Jenni the best and easiest to use. Especially for rapidly re-formatting references and developing new paper ideas."

Gareth

Editor-in-chief, Taylor & Francis


Frequently asked questions

Is Reviews free?

When should I use Reviews?

Where do citation suggestions come from?


Make progress toward your best work today

Write your first paper with Jenni today and never look back

Start for free

No credit card required

Cancel anytime

5M+

academics worldwide

5.2 hours saved

on average per paper

15M+

papers written on Jenni
