Research Collaboration

Help validate a free cognitive assessment platform

Neuropsych is a browser-based cognitive battery with 29 tests across 7 domains. It's free, open access, and built on published paradigms. What it doesn't have yet is formal validation data, and that's where we need help.

Cristian Dominguez Rein-Loring

Creative Technology Lead, Meta Platforms

I was diagnosed with dyscalculia as an adult, and the experience got me interested in how cognitive assessment actually works. Neuropsych started as a passion project: I wanted to see if I could build a serious, research-grounded battery using modern web technology.

My background is in creative technology, not clinical psychology. The platform works and people are using it, but I want the science behind it to be solid. That's where you come in.

What we need

Three studies that would change everything

These are the validation gaps standing between a promising tool and a credible one. Each study is self-contained and could be run by a single lab or graduate student with IRB access.

1

Convergent validity against established batteries

Do our tests measure what they claim to measure? The most direct way to find out is to administer our battery alongside gold-standard instruments (WAIS-V, D-KEFS, Conners CPT-3) to the same participants and correlate the results.

What you'd do: Recruit 80 to 100 adults, administer both batteries in counterbalanced order, and analyze correlations across matched subtests. We provide the digital battery, raw data export, and co-authorship.

What it would prove: That our browser-based implementations produce scores consistent with lab-administered equivalents. Correlations of r ≥ .60 across matched domains would support the platform's use as a viable screening instrument.

2

Test-retest reliability

If someone takes the same tests two weeks apart, do they get similar scores? Without temporal stability, individual results are meaningless and the platform can't be used for tracking change over time.

What you'd do: Administer the full battery twice to 50+ participants at a 2 to 4 week interval, then run ICC analysis across all subtests. We handle the recruitment infrastructure if you provide the participant pool.

What it would prove: That scores are stable enough to reflect real cognitive ability rather than noise. An ICC of ≥ .80 is a widely used threshold for good reliability in clinical contexts.
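For the two-session design, the ICC computation could be sketched as below. This is a plain-Python implementation of ICC(2,1) — two-way random effects, absolute agreement, single measures — which is one common choice for test-retest data; a collaborating lab might reasonably prefer a different ICC variant:

```python
def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.

    scores: one row per participant, one column per session,
    e.g. [[s1_t1, s1_t2], [s2_t1, s2_t2], ...]
    """
    n = len(scores)        # participants
    k = len(scores[0])     # sessions
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]

    # Partition total variance into participants, sessions, and error
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_err = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Illustrative test-retest data: four participants, two sessions
retest = [[10, 11], [20, 19], [30, 31], [40, 39]]
print(f"ICC(2,1) = {icc_2_1(retest):.3f}")
```

With these toy numbers the between-participant variance dwarfs the session-to-session noise, so the ICC comes out close to 1; real subtest data would of course be messier.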

3

Clinical discrimination for ADHD and dyscalculia

Can the platform distinguish between people with documented ADHD or dyscalculia and neurotypical controls? This is the study that would make Neuropsych useful as an actual screening tool.

What you'd do: Recruit three groups of 30+ participants each (ADHD-diagnosed, dyscalculia-diagnosed, and neurotypical controls). Administer the battery and run ROC analysis to determine screening thresholds.

What it would prove: Whether the platform can reliably flag individuals who should seek formal clinical evaluation. An AUC of ≥ .80 would support its use as a first-pass screening instrument.
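The AUC at the heart of the ROC analysis has a simple pairwise interpretation: the probability that a randomly chosen clinical-group score exceeds a randomly chosen control score. A minimal sketch, assuming higher scores indicate greater impairment (with accuracy-style scores the direction flips):

```python
def auc(clinical, controls):
    """Area under the ROC curve via the pairwise (Mann-Whitney) definition.

    Counts, over all clinical/control pairs, how often the clinical
    score is higher (ties count half). 0.5 = chance, 1.0 = perfect
    separation; the study's screening criterion is AUC >= .80.
    """
    wins = sum(1.0 if c > h else 0.5 if c == h else 0.0
               for c in clinical for h in controls)
    return wins / (len(clinical) * len(controls))

# Illustrative impairment-index scores (not real data)
adhd_group = [0.9, 0.8, 0.7, 0.6]
control_group = [0.4, 0.5, 0.3, 0.6]
print(f"AUC = {auc(adhd_group, control_group):.3f}")
```

This O(n·m) version is fine at study sizes of 30+ per group; a production analysis would likely use an established statistics package, which also provides the per-threshold sensitivity/specificity pairs needed to pick a screening cutoff.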

Who can help

If any of this sounds like your work

You don't need to commit to a full study to be useful. Even a conversation about methodology or a pointer to the right literature would help.

University researchers

Psychology, neuropsychology, or educational psychology faculty with IRB access and participant pools. Especially useful for convergent validity and normative studies.

Clinical neuropsychologists

Access to patients with documented ADHD, dyscalculia, or other cognitive profiles is essential for the clinical discrimination study.

School psychologists

Interested in piloting a free digital screening tool as a pre-referral instrument? The dyscalculia and ADHD screening paths are designed for exactly this use case.

Doctoral students

Looking for dissertation data? The platform provides a complete, scalable paradigm for normative or validation studies, and we'll support you with data access and co-authorship.

What we provide

Your research, our infrastructure

Raw data access

Full anonymized trial-level data in CSV and JSON, with millisecond timing on every response, ready for your own statistical pipeline.

Co-authorship

Lead authorship on validation publications for principal investigators. We want this research published, not gated.

IRB and ethics support

We provide consent templates, data handling protocols, and GDPR-compliant infrastructure so you can focus on the research, not the paperwork.

Platform flexibility

We can adjust, add, or modify tests to match your experimental protocol, including custom battery configurations for specific research designs.

Spanish-speaking populations

We are particularly looking for collaborators in Spain and Latin America.

Eight of our tests are already referenced against the NEURONORMA project (Peña-Casanova et al., 2009–2013, n=1,365), the gold-standard normative dataset for Spanish adults. We want to extend that coverage to the full battery, especially the dyscalculia and ADHD screening tests.

Our dyscalculia battery is grounded in constructs from the Numerical Cognition Lab at the University of Málaga, including magnitude comparison, transcoding, place-value processing, and arithmetic fluency. If you work in numerical cognition, learning difficulties, or adult cognitive assessment in Spanish-speaking populations, we would especially welcome your involvement.

Reach out

If any of this sounds like something you'd want to be part of, we'd like to hear from you.

Or email [email protected] directly.
