Summary
Creatine is a naturally occurring compound found in the human body. As a supplement, it’s widely considered to be effective at boosting athletic performance, and there is some evidence that creatine can also act as a cognitive enhancer. I conducted a two-month-long self-blinded, placebo-controlled experiment to see if creatine supplementation (5 grams daily, taken in the morning) would improve my reaction time (RT). I chose creatine because it’s known to be safe, and particularly because I’m probably deficient in it due to my longstanding vegan diet. Someone starting from a deficient state might be expected to experience a larger effect from supplementation. I chose RT as a metric because multiple studies have demonstrated a correlation between RT and cognitive ability. I found that my RT was slightly faster when I was taking creatine vs. placebo, though I haven’t yet performed the necessary analysis to determine whether the effect is significant (I’m open to suggestions on the proper analysis to perform here). This research project is part of my long-term goal to refine and validate reaction-time testing as a tool for assessing the cognitive benefits of various interventions.
Study design and results
My study was divided into six periods of ten days each, beginning with a baseline phase and concluding with a washout period. Between these bookends, the design consisted of a placebo period followed by creatine supplementation, and then another placebo/creatine pair. The capsules I took each morning were disguised so that I didn’t know whether I was taking the creatine or the placebo. The creatine dose was in the form of ten 500mg capsules (for a total of 5 grams of creatine monohydrate) sourced from Life Extension. The placebo capsules contained allulose and were made to be indistinguishable from the creatine capsules. I measured my reaction time three times each day, roughly in the late morning, mid-afternoon, and evening.
The following graphs show top-level results and include means from all 176 testing sessions (I missed four sessions out of a planned 180). As described in more detail further below, each testing session took about three minutes and consisted of 35 individual trials that measured my reaction time in response to a stimulus presented on the computer.
In the two graphs below, each data point represents the mean RT from a testing session. The shorter lines represent linear regression fits to the data in each phase, and the longer green line is a linear regression fit to all the data points. Note that lower values are better in the sense that they represent faster reaction times.

I’ve made the data available as a CSV file, aggregated by testing session. The raw data (including each individual trial, but without incorrect responses) is here.
The graph below depicts the same data as above but without the regression lines. Instead, the solid green line represents a locally estimated scatterplot smoothing (loess) curve fitted to all the data points:

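For anyone who wants to reproduce graphs along these lines from the session-level CSV, here is a minimal Python sketch. The filename and column names ("session order, phase, mean RT") are my assumptions for illustration, not the file's actual schema, and the sketch combines the elements of both graphs into one figure for brevity:

```python
# Minimal sketch: per-phase regression lines plus a loess curve over all
# sessions. Filename and column names are assumptions, not the real schema.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

df = pd.read_csv("rt_sessions.csv")        # hypothetical filename
df["x"] = np.arange(len(df))               # session index as the x-axis

fig, ax = plt.subplots()
ax.scatter(df["x"], df["mean_rt"], s=12, alpha=0.6)

# Short lines: a separate linear fit within each study phase.
for _, grp in df.groupby("phase", sort=False):
    slope, intercept = np.polyfit(grp["x"], grp["mean_rt"], deg=1)
    ax.plot(grp["x"], slope * grp["x"] + intercept)

# Solid green line: loess smoothing over all the data points.
smoothed = lowess(df["mean_rt"], df["x"], frac=0.3)
ax.plot(smoothed[:, 0], smoothed[:, 1], color="green")

ax.set_xlabel("Testing session")
ax.set_ylabel("Mean RT (ms)")
plt.show()
```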
The following table summarizes the mean RTs grouped by study phase (with a global mean included on the last line of the table):
| Study Phase | Mean RT (ms) | # of Observations |
|---|---|---|
| Baseline | 313.4 | 28 |
| Placebo 1 | 313.4 | 30 |
| Creatine 1 | 311.8 | 30 |
| Placebo 2 | 313.9 | 29 |
| Creatine 2 | 312.9 | 29 |
| Washout | 313.4 | 30 |
| All phases | 313.1 | 176 |
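For reference, here is a short sketch that recomputes this table from the session-level CSV (with the same assumed column names as in the plotting sketch above):

```python
# Recompute the per-phase summary table from the session-level CSV.
# Column names are assumptions, as in the plotting sketch above.
import pandas as pd

df = pd.read_csv("rt_sessions.csv")
summary = df.groupby("phase", sort=False)["mean_rt"].agg(["mean", "count"])
summary.loc["All phases"] = [df["mean_rt"].mean(), len(df)]
summary["mean"] = summary["mean"].round(1)
summary["count"] = summary["count"].astype(int)
print(summary)
```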
So… did the creatine actually have an effect?
Good question, and one that still needs to be answered. The graphs above depict results that seem consistent with a small effect of creatine. I’m not proficient in statistics, and I don’t know what type of analysis would best determine whether the effect is larger than what one might expect from chance. Also, in retrospect, the study phases should probably have lasted longer than ten days. If I do a follow-up study on creatine, I’ll run it longer (though how much longer is another open question).
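One simple starting point might be a permutation test: pool the creatine and placebo session means, repeatedly shuffle the labels, and count how often a shuffled difference is at least as large as the observed one. Here is a sketch (column names are again assumptions, and note that it treats sessions as independent, which ignores any day-to-day autocorrelation in the data):

```python
# Permutation test: is the placebo-minus-creatine difference in mean RT
# larger than chance relabeling would produce? Treats sessions as
# independent, which is a simplification. Column names are assumptions.
import numpy as np
import pandas as pd

df = pd.read_csv("rt_sessions.csv")
creatine = df.loc[df["phase"].str.startswith("Creatine"), "mean_rt"].to_numpy()
placebo = df.loc[df["phase"].str.startswith("Placebo"), "mean_rt"].to_numpy()

observed = placebo.mean() - creatine.mean()   # positive if creatine was faster

rng = np.random.default_rng(0)
pooled = np.concatenate([creatine, placebo])
n_creatine = len(creatine)
n_perm = 10_000
hits = 0
for _ in range(n_perm):
    rng.shuffle(pooled)
    diff = pooled[n_creatine:].mean() - pooled[:n_creatine].mean()
    if diff >= observed:
        hits += 1

print(f"observed difference: {observed:.2f} ms")
print(f"one-sided p-value: {hits / n_perm:.3f}")
```

A more sophisticated approach (for example, a mixed-effects model with terms for time of day and long-term trend) would handle the structure of the data better, but a test like this at least gives a rough sense of whether a difference of a millisecond or two could easily arise by chance.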
How was reaction time measured?
I measured my reaction time using a Windows app that I wrote. The app is open source under the MIT license, and you can download it here. The specific testing protocol was developed by the late psychologist (and noted self-experimenter) Seth Roberts. Each testing session consists of 35 separate trials: in each trial, the app displays a random digit between 2 and 8, inclusive, and the subject’s task is to hit the corresponding key as quickly as possible. Incorrect responses are not counted, and each digit is presented five times during a session. Each session takes approximately three minutes.
This protocol is an example of a “choice reaction time test”, in which the subject must select and execute the correct response from multiple options when presented with varying stimuli. Tests of this kind are often used to evaluate decision-making speed and cognitive processing ability.
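For illustration only, here is a toy console version of that trial logic. It is not the actual app (which is a Windows GUI that captures single keypresses); it just makes the protocol concrete:

```python
# Toy version of the choice-RT protocol: each digit 2-8 appears five
# times (35 trials) in random order; only correct responses are kept.
# This is NOT the actual app -- input() needs an Enter press, so the
# absolute times it records are inflated.
import random
import time

digits = list(range(2, 9)) * 5   # digits 2..8, five times each
random.shuffle(digits)

times_ms = []
for digit in digits:
    start = time.perf_counter()
    response = input(f"{digit} > ")                 # type the digit shown
    elapsed_ms = (time.perf_counter() - start) * 1000
    if response.strip() == str(digit):              # discard incorrect responses
        times_ms.append(elapsed_ms)

if times_ms:
    mean_rt = sum(times_ms) / len(times_ms)
    print(f"{len(times_ms)} correct trials, mean RT {mean_rt:.0f} ms")
```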
Why measure RT?
There is a well-established correlation between RT and measures of cognitive ability, most notably IQ. These correlations come from studies in which the RT and cognitive measures are averaged across populations of subjects. It’s less well established that RT can be used to track changes in cognitive performance within a single individual over time, but it seems plausible, and Roberts theorized that this was a valid approach. His own “n=1” experiments further reinforced his view.
Before his untimely death in 2014, Seth Roberts was promoting the idea that self-experimenters could use reaction time to track their brain health and to test the effects of various interventions. His plan was to help establish a distributed community of people who would all conduct self-experiments using RT as a metric. I collaborated with him on studies of caffeine, soy, and flaxseed oil, but I set that work aside after he died. After a long hiatus, I was prompted to revisit the project by an interesting post on Scott Alexander’s Substack page.
Alexander posted a long list of putative nootropics ranked by their effectiveness, as gauged by a survey of users. Asking people, “Did substance X improve your cognitive performance?” may produce some useful leads on what works and doesn’t work, but my goal was to find an assessment method that is more objective, where changes in reaction time can be used to estimate differences in effectiveness.
Have other self-experimenters done studies on RT?
Yes, they have. Juraj Karpis, Richard Sprague, and Eri Gentry are three other self-experimenters who worked with the late Seth Roberts and performed studies using his RT metric.
Juraj tested the effects of eating two cans of sardines in the morning, and he found a small improvement in his RT. He also investigated the effect of a vegan diet and observed a worsening in his RT; an earlier, similar diet experiment demonstrated the same effect (the linked page is written in Slovak, but your browser can probably translate it automatically; the RT data is presented in Section 5, about halfway down the page). Juraj also used his RT test in a blinded experiment comparing caffeinated coffee with decaffeinated, and he found that the caffeinated coffee made him faster.
Richard found an intriguing effect of simvastatin (a drug to lower cholesterol). And, similar to Juraj’s findings, Richard determined that fish oil makes him faster on the RT test. Richard also has a video where he talks about his RT testing.
Eri Gentry, Greg Biggers, and their collaborators organized a crowdsourced study (titled “Butter Mind”) to examine the effects of butter consumption on reaction time. A total of 65 participants completed the study. Gentry discusses the origin of the study in this video, and the results are outlined on the Quantified Self website (retrieved from the Internet Archive). Briefly, the consumption of butter was associated with a significant improvement in RT scores, as compared to no intervention or consuming coconut oil.
I performed earlier self-experiments on caffeine (made me faster), soy consumption (no effect), and flaxseed oil (also no effect).
Instead of measuring RT, why not track productivity, ability to concentrate, or other more direct metrics of cognitive performance?
I actually tried doing this, but I found it surprisingly difficult. For a period of about three weeks, I would sit down at the end of the day and rate myself on my ability to concentrate and on my productivity that day. I used a scale of one to ten. I discovered that it was hard to come up with a number. I’d reflect on various events from the day, and my thinking would go something like this: “Well, I did complete that one project at work, but then I also forgot (again) to make a dentist appointment. And I got caught daydreaming in a boring meeting. On the other hand, I finally figured out the subtle bug in the Python script I’ve been developing”. In short, most days were about the same overall (with some positive achievements as well as some areas where I was deficient), and the process of assigning ratings seemed arbitrary and highly subjective. I gave up.
How was the self-blinding done?
I bought some large, empty, opaque capsules from Amazon, and I placed the creatine capsules inside the larger ones. The outer capsules were packed with a small amount of inert microcrystalline cellulose to immobilize the inner creatine capsule (otherwise, it would bounce around inside).
The placebo capsules were filled with allulose, a low-calorie sweetener that I use in my morning coffee. I originally tried to use microcrystalline cellulose, but it was very lightweight, and I could easily tell the difference between an active capsule and the placebo. The capsules filled with allulose were roughly the same weight as the creatine-containing ones. The placebos and the active capsules were placed in separate containers with folded-up notes inside that identified the contents. My wife assisted in randomizing the containers so that I didn’t know which one was which.
Why study creatine?
Briefly, for the reasons given in the summary: creatine is known to be safe, and my longstanding vegan diet means I’m probably deficient in it, so I might expect a larger effect from supplementation than someone starting from a replete state would. (A fuller discussion is still to be written.)
More about self-experimentation and personal science
- The late experimental psychologist Seth Roberts was one of the pioneers and popularizers of self-experimentation. Here is a video of him giving a talk on how he learned to improve his sleep and his mood through self-experimentation.
- Seth Roberts died in 2014. His blog isn’t available online anymore, but you can view it through the Wayback Machine. Also, this large PDF file contains a complete record of all his blog posts and the comments on those posts.
- The Slime Mold Time Mold bloggers discuss n=1 experiments
- Adam Mastroianni: “An invitation to a secret society”
- Richard Sprague’s Personal Science Substack site
- Gary Wolf is working on a book titled Personal Science: Learning to Observe. It doesn’t seem to be available yet, but Richard Sprague reviewed a draft of it.
- The Personal Science Wiki
- Anonymous blogger Dynomight conducted a blinded 16-month self-experiment on the effects of theanine. He posted a link to Hacker News, where a lively discussion ensued.
- David Anekstein used Reflect, an app he wrote, to do some self-experiments on coffee
- Examine is a website that evaluates and condenses scientific studies to identify effective supplements and non-drug treatments.
More links coming soon.
Seeking collaborators
If any of this sounds interesting to you, I’d love to collaborate! Please contact me. I’m especially interested in hearing from people who are proficient in statistical analysis, though I’m happy to work with anyone.