Privacy & Data Architecture
GradientEdu was designed from the first line of code with one non-negotiable constraint: student data never leaves the closed research ecosystem formed by the researcher, the school, and the teacher.
At the moment of data intake, student identifiers are separated from scores. Names are replaced with anonymous tokens before any data is processed. The AI engine never sees a student's name — only their response patterns.
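As a sketch of that intake step — the column layout, token format, and function names here are illustrative assumptions, not the production code — the separation, and the teacher-side re-mapping described later in the walkthrough, might look like:

```python
import csv
import io
import secrets

def separate_identifiers(csv_text: str):
    """Split an uploaded roster CSV into a token->name map (kept by the
    teacher) and an anonymized table (the only thing the pipeline sees).
    Assumes the first column holds the student name."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    token_map = {}
    anonymized = [["token"] + header[1:]]
    for row in reader:
        token = "S" + secrets.token_hex(4)   # opaque random label
        token_map[token] = row[0]
        anonymized.append([token] + row[1:])
    return token_map, anonymized

def map_back(report_rows, token_map):
    """Teacher-side only: re-attach names to anonymized report rows."""
    return [(token_map.get(token, "<unknown>"), profile)
            for token, profile in report_rows]
```

The key property is that `token_map` never travels with the anonymized table: the pipeline sees only tokens, and only the teacher, holding the map, can resolve them back to names.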
Assessment data is processed in temporary server memory and deleted within minutes of report generation. Nothing is written to a permanent database. No student record persists beyond the analysis window.
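A minimal sketch of that lifecycle, assuming a scratch-directory pattern (the actual pipeline may manage memory differently):

```python
import os
import tempfile

def analyze_ephemerally(payload: bytes):
    """Process an upload in a scratch directory that is removed when the
    session ends -- even if the analysis raises. Returns the result and
    the (now deleted) scratch path to show that nothing persists."""
    with tempfile.TemporaryDirectory() as scratch:
        path = os.path.join(scratch, "upload.csv")
        with open(path, "wb") as f:
            f.write(payload)
        result = len(payload)  # placeholder for the real analysis
    # the context manager has already deleted the directory here
    return result, path
```

Because cleanup is tied to the `with` block rather than to a separate delete step, there is no code path on which the uploaded file survives the session.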
Data flows only between three parties: the teacher who uploads it, the research pipeline that analyzes it, and the school that authorized the pilot. No third-party data sharing. No advertising. No external access.
Most platforms write a FERPA policy after they build the product. GradientEdu's FERPA compliance is the product. The data flow itself makes violations structurally impossible.
The research application is fully behind a login wall with JWT-secured routes. No data, no analysis, no output is accessible without authenticated credentials. Every session is verified independently.
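The shape of that per-request check can be sketched with standard-library HMAC; HS256 signing and these helper names are assumptions for illustration, not the application's actual auth code:

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _unb64url(seg: str) -> bytes:
    # restore stripped base64url padding before decoding
    return base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4))

def sign_jwt(claims: dict, secret: bytes) -> str:
    """Issue an HS256 JWT for an authenticated teacher session."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    sig = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64url(sig)}"

def verify_jwt(token: str, secret: bytes) -> dict:
    """Check signature and expiry on every request; raise on any failure."""
    header, payload, sig = token.split(".")
    expected = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _unb64url(sig)):
        raise ValueError("bad signature")
    claims = json.loads(_unb64url(payload))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims
```

Verifying independently on every route means a stolen or expired token fails closed: no claims, no data.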
The Anthropic API powers language generation in the pipeline. Student names, scores, and identifiers are stripped before any prompt is constructed. The AI receives anonymized patterns only — never student records.
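Illustratively, the prompt-construction stage only ever sees token-keyed response patterns; the record schema, field names, and guard below are assumptions, not the production code:

```python
def build_anonymized_prompt(records):
    """Construct an LLM prompt from response patterns only. Each record is
    expected to carry a token and a list of item responses -- never a
    name, student ID, or email (hypothetical schema for illustration)."""
    forbidden = {"name", "student_id", "email"}
    lines = []
    for rec in records:
        # defensive check: refuse to build a prompt if an identifier slipped through
        if forbidden & rec.keys():
            raise ValueError("identifier leaked into prompt stage")
        pattern = " ".join(str(v) for v in rec["responses"])
        lines.append(f"{rec['token']}: {pattern}")
    return ("Describe the diagnostic patterns in these anonymized "
            "item responses:\n" + "\n".join(lines))
```

Failing loudly at this boundary, rather than silently dropping a name field, makes an identifier leak a crash instead of a quiet violation.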
Every step in the data flow is designed to minimize exposure, strip identifiers early, and leave nothing behind. Here is exactly what happens from upload to output.
1. Upload. The teacher exports a CSV from their gradebook or SIS — rows are students, columns are items. The file is uploaded directly to the secure research server over encrypted HTTPS. (Encrypted in transit.)

2. Identifier separation. The first operation on every uploaded file strips identifiers: student names are replaced with anonymous tokens, and the name column never enters the analysis pipeline. (Names never reach AI.)

3. Analysis. Principal Component Analysis, computed via singular value decomposition, is applied to the anonymized item-response matrix in temporary server memory. All computation happens server-side in an isolated session. (In-memory only.)

4. Report delivery. The diagnostic report is returned directly to the authenticated teacher. The output contains anonymized profiles that the teacher maps back to their own roster — locally, never on the server. (Teacher-side mapping only.)

5. Deletion. Upon session completion, all temporary files are deleted from server storage. No student record, no score, no identifier persists. The server retains no trace of the analysis.
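The in-memory PCA/SVD analysis described in the walkthrough can be sketched with NumPy; the matrix shape and component count are illustrative:

```python
import numpy as np

def pca_profiles(X: np.ndarray, k: int = 2):
    """PCA of an anonymized item-response matrix via SVD, entirely in
    memory. Rows are tokenized students, columns are assessment items."""
    Xc = X - X.mean(axis=0)                # center each item column
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :k] * S[:k]              # student coordinates on top-k components
    explained = (S ** 2) / (S ** 2).sum()  # variance share per component
    return scores, Vt[:k], explained[:k]
```

Nothing in this computation needs, or ever touches, a student name — only the tokenized rows of the response matrix.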
Zero retention confirmed.

The Family Educational Rights and Privacy Act (FERPA) requires that educational institutions protect the privacy of student education records. Most EdTech platforms achieve FERPA compliance through policy — terms of service, data processing agreements, and privacy notices.
GradientEdu achieves FERPA compliance through architecture. Because student names are stripped before analysis, because data is never written to a persistent database, and because the ecosystem is closed between researcher, school, and teacher — there is no mechanism by which a FERPA violation could occur. The data flow makes it structurally impossible.
This was not a retrofit. It was the design requirement that preceded every other build decision.
GradientEdu is an independent applied research project. The researcher is directly accountable for every data handling decision. If you have questions about how your school's data would be managed in a pilot, reach out.
Data Privacy Inquiry →