Meet your intelligent research assistant

Jenni is the AI workspace where researchers read, write, and cite, with every claim traceable back to its source.


Loved by over 6 million scholars


Beyond Detection: A Framework for Ethical AI Integration in Academic Research

The proliferation of generative AI in academic contexts has revealed a fundamental truth that institutions have been reluctant to acknowledge:

The detection paradigm has failed.

AI detection tools achieve accuracy rates often below 80% in independent testing (Wakjira et al., 2025). Their false positive rates can reach 50% across widely used platforms (Weber-Wulff et al., 2023). There is also documented systematic bias, with over 61% of non-native English writing flagged as AI-generated (Liang et al., 2023). Meanwhile, studies indicate that 13.5% to 22.5% of academic papers now show evidence of AI assistance (Kobak et al., 2025), so the pool of writing exposed to misclassification is large and growing. The current approach of "detect and punish" thus creates more harm than it prevents.

The path forward requires abandoning unreliable surveillance in favor of transparency architectures: tools and policies designed from inception to make AI contributions visible, auditable, and appropriately constrained.

Part I: The epistemological limits of AI detection

Contemporary AI detection rests on a brittle assumption: that the statistical fingerprints of machine-generated prose remain stable, distinguishable from human writing, and resistant to even modest paraphrase. Each of these premises dissolves under sustained scrutiny. Modern generative systems are trained on the same authoritative corpora that high-quality human writing draws from, and their outputs converge on precisely the registers detectors are calibrated to flag as natural (Sadasivan et al., 2024). The result is a moving target that detectors cannot follow without retraining on every new model generation — a posture that is neither operationally nor epistemologically sustainable.

Empirical work over the past eighteen months has documented this drift in granular detail. When evaluated on out-of-distribution writing — graduate theses, technical manuscripts, translated passages — detector accuracy collapses well below the threshold required for any high-stakes adjudication (Liang et al., 2023; Sadasivan et al., 2024). A meta-analysis of fourteen commercial detectors found a median accuracy of 39.5% on lightly paraphrased text — a figure that is not merely poor but actively misleading. Institutions deploying these systems are operating below the level of a coin flip while presenting their judgments as forensic evidence.

1.1 The base-rate fallacy in detection deployment

Even a hypothetical detector with 95% sensitivity and 95% specificity — performance no current system approaches — produces an unacceptable error rate when applied across populations where undisclosed AI use is rare. If 5% of submissions involve a genuine policy violation, applying such a detector to a class of 400 students correctly flags 19 of the 20 actual cases while wrongly accusing roughly 19 honest students. Real detectors operating below 80% accuracy push the false accusation rate beyond what any educational institution can ethically sustain (Fleckenstein et al., 2024).
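The arithmetic above can be made concrete with a short sketch. The 400-student class, 5% violation rate, and 95% sensitivity/specificity are the hypothetical figures from this paragraph, not measurements of any real detector:

```python
def detection_outcomes(n_students, base_rate, sensitivity, specificity):
    """Expected (true positives, false positives) when a screening
    tool is applied uniformly across a student population."""
    violators = n_students * base_rate              # actual policy violations
    honest = n_students - violators                 # students with nothing to hide
    true_positives = violators * sensitivity        # correctly flagged
    false_positives = honest * (1 - specificity)    # wrongly accused
    return true_positives, false_positives

tp, fp = detection_outcomes(n_students=400, base_rate=0.05,
                            sensitivity=0.95, specificity=0.95)
print(f"Correctly flagged: {tp:.0f}")   # 19 of the 20 actual cases
print(f"Wrongly accused:   {fp:.0f}")   # 19 honest students
print(f"Precision: {tp / (tp + fp):.2f}")
```

Even with this generous 95%/95% detector, precision is only 0.50: a flagged submission is no more likely to be a violation than a coin flip, because honest students vastly outnumber violators.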

These statistical realities are compounded by a recursive contamination problem. As model output increasingly populates the open web, the next generation of detectors trains on a corpus in which human and machine are no longer cleanly distinct categories — they are interleaved, cross-cited, and mutually shaping (Shumailov et al., 2024). Detection at that point ceases to identify a meaningful boundary; it merely reproduces the priors encoded during its last training cycle.

1.2 Disparate impact and the linguistic monoculture

The harms of unreliable detection are not distributed evenly. Independent audits repeatedly show that detectors penalize writers whose first language is not English at rates three to four times higher than native speakers (Liang et al., 2023), and that lower-perplexity prose — the very prose that structured academic training tends to produce — registers as "machine-like" to most commercial models. A system that punishes linguistic care while rewarding idiosyncrasy is not measuring authorship; it is measuring stylistic distance from a narrow Anglophone norm. The pedagogical consequences are severe: students learn to write worse on purpose to evade the detector, inverting every signal a writing program is meant to cultivate.


Trusted by universities and companies worldwide.

How it works

From blank page to cited paper in three steps

01

Bring in your sources

Drag and drop PDFs, import from Zotero or Mendeley, or let Jenni search over 200 million papers. Your library is ready in seconds.

02

Write with AI

Smart AI autocomplete suggests sentences grounded in real papers. Each suggestion shows its source and can be traced back to the original text.

03

Cite, review, export

One-click inline citations in over 2,600 styles. Check every claim against the original PDF. Export to .docx, LaTeX, or HTML.

Why JENNI

See Peer Review in action

Watch how Jenni reads a real manuscript, scores it against the rubric, and leaves comments where each section needs work.


Not just another AI chatbot

There are hundreds of AI tools. Here is what makes Jenni different from ChatGPT.

Reads the full manuscript

Peer Review reads your full draft cover to cover, capturing every claim, every method note, and every transition, so feedback reflects the whole document.

Same criteria reviewers use

Peer Review fills out the same review form top journals use, with scores on soundness, contribution, and presentation plus written feedback.

Comments tied to passages

Jenni anchors every comment to a specific sentence, with a reason and a suggested fix. You know what to change and where, not just that something is off.

New: Reviews

Catch weaknesses before reviewers do

Reviews analyzes every claim in your paper, cross-references your sources, and flags issues across six categories. Submit with confidence, not anxiety.
Unsupported or speculative claims are the most common cause of peer-review rejection. Jenni finds them in seconds.

[Product demo: the Peer Review panel scores a manuscript 8/10 against a peer-review rubric (Soundness 3/4, Presentation 4/4, Contribution 3/4), with reviewer comments on results, strengths, and weaknesses. Claim Confidence flags 10 citation issues across categories such as contradicted, unsupported, weakly supported, outdated, and mismatched citations. Proofread suggests 18 edits spanning word choice, formality, grammar, transitions, and overgeneralization. Tone of Voice leaves 22 notes on vocabulary, syntax, punctuation, tone, and flow.]

Citation analysis

Academic proofreading

Inline feedback

"The Claim Confidence feature is super useful. It flags any unsupported, overstated, or weakly supported claims."

Sabine Hossenfelder

Physicist & Author of Lost in Math

"The Claim Confidence feature is super useful. It flags any unsupported, overstated, or weakly supported claims."

Sabine Hossenfelder

Physicist & Author of Lost in Math

"The Claim Confidence feature is super useful. It flags any unsupported, overstated, or weakly supported claims."

Sabine Hossenfelder

Physicist & Author of Lost in Math

"I regularly try AI tools for research and have found Jenni the best and easiest to use. Especially for rapdily re-formatting references and developing new paper ideas."

Gareth

Editor-in-chief, Taylor & Francis

"I regularly try AI tools for research and have found Jenni the best and easiest to use. Especially for rapdily re-formatting references and developing new paper ideas."

Gareth

Editor-in-chief, Taylor & Francis

"I regularly try AI tools for research and have found Jenni the best and easiest to use. Especially for rapdily re-formatting references and developing new paper ideas."

Gareth

Editor-in-chief, Taylor & Francis

Frequently asked questions

Are Reviews free?

When should I use Reviews?

Where do citation suggestions come from?


Make progress on your greatest work today.

Write your first paper with Jenni today and never look back

Start for free

No credit card required

Cancel anytime

5M+

Academics worldwide

5.2 hours saved

On average per paper

15M+

Papers written on Jenni
