There are two exciting new features in BrainSpell: editing of coordinate tables, and the ability to manually add new articles to BrainSpell’s database.
Previously, articles and coordinate tables strictly reflected those parsed by NeuroSynth, and it was only possible to flag them as correct or incorrect. Now, if a table has incorrect or missing data, you can fix it! Once you are logged in, all the table fields become editable (just click on them), and the tables display -/+ icons beside each row to delete an incorrect row or add a missing one (see Fig. 1). In addition, the ‘Title’ and ‘Caption’ fields of each table are also editable.
It is also now possible to manually add articles to BrainSpell’s database. To add a new article, find it in PubMed and copy its PubMed ID. For example, we will add the article by Shepherd et al (2014), PubMed ID 25268788. Now, enter the URL http://brainspell.org/article/ followed by the PubMed ID of your article; in our case: http://brainspell.org/article/25268788. When BrainSpell realises that the article is not in the database, it will pull its metadata (authors, title, abstract, tags, etc) from PubMed (Fig. 2).
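The only thing you need to build yourself is that URL. As a minimal illustration (the URL pattern is the one above; the helper function itself is just a hypothetical convenience, not part of BrainSpell):

```python
def brainspell_url(pmid):
    """Build the BrainSpell article URL for a numeric PubMed ID."""
    pmid = str(pmid).strip()
    if not pmid.isdigit():
        # PubMed IDs are plain integers, e.g. 25268788
        raise ValueError("PubMed IDs are numeric, e.g. 25268788")
    return "http://brainspell.org/article/" + pmid

print(brainspell_url(25268788))
# -> http://brainspell.org/article/25268788
```

Opening that URL in your browser while logged in is what triggers the metadata fetch described above.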
If you are logged in, BrainSpell will ask whether you want to create a new entry, and for the number of stereotaxic coordinate tables present in the article; in our case, two (Fig. 3).
The experiments have empty titles and captions, and a coordinate at X=0, Y=0 and Z=0 is present by default, with its corresponding sphere in the translucent brain (Fig. 4).
You can edit all these fields (titles, captions and coordinates) to enter the complete table, as in Fig. 5.
These functionalities are still very new, and there may be bugs lurking around. Please let us know if you find anything, using the ‘issues’ tracker on GitHub at http://github.com/r03ert0/brainspell.
In the translucent brains showing the stereotaxic coordinates for the experiments in a paper, you can now click on a red sphere to highlight the corresponding coordinate row in the table or, vice versa, click on a row to highlight the corresponding sphere.
The code I used is based on the tutorial published by Soledad Penadés in her excellent blog: http://soledadpenades.com/articles/three-js-tutorials/object-picking.
Daniel Margulies from the Neuroanatomy and Connectivity group at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig is organising a Pizza-BrainSpell-Tagging-Sprint! Thank you very much!
When tagging, I often focus first on the methodological aspects (checking that the tables are correct, finding the stereotaxic space and the number of subjects) and then go for the ontologies. Sometimes it is possible to vote for the MeSH tags (those added by PubMed) just from reading the abstract, but to tag the experiments you’ll have to read the article’s full text. If you don’t have much time, you can just tag using BrainMap’s ontology, which is very small, but Cognitive Atlas’s ontologies are more detailed and can also be edited (at cognitiveatlas.org).
We’ll also be tagging articles at the Institut Pasteur. In case anyone needs help or has suggestions for improvement, I have opened this chat room that I’ll be checking often:
I have just uploaded an update to brainspell. There are several new things:
- Now when you search for a term, a simple stereotaxic brain viewer shows the locations corresponding to your search. The first time you use it the viewer may take some time to load the brain anatomy used for reference (Colin27), but afterwards it will be stored locally in your browser’s cache and be much faster.
- I added more fields for gathering data about the articles in the database. There’s one field for entering the stereotaxic space (either MNI or Talairach), and another one to enter the total number of subjects used in the article (this information is quite hard to get using automatic text-mining alone…). There is also a field below each table to flag incorrectly parsed data. This may be a table with repeated coordinates, or one displaying numbers that are not actually stereotaxic coordinates.
- Finally, at the end of each paper there is a new Discussion section, where you can better explain your choices, or discuss someone else’s tags.
I recently discovered how to make screencasts, and made these two YouTube videos, one showing brainspell’s GUI (a previous version… I’ll have to record a new one), and the other showing the Brain Coactivation Map Viewer. If you like brainspell, you may want to check the CoactivationMap.app. It uses the same data as brainspell (kindly provided by Tal Yarkoni’s excellent neurosynth web app) to create an interactive coactivation map of the human brain. As you browse through the different brain regions, you will see the corresponding coactivation networks, and a list of the MeSH tags whose associated network most closely matches the current one. Clicking on those tags will launch a query in brainspell showing the corresponding articles. Here are the videos:
Quick intro to brainspell
Quick intro to the CoactivationMap.app
Articles now present a link to the full-text version on the journal’s website. (Additionally, several bugs have been corrected, from the many that may still lurk around.)
First functional version of brainspell is now on-line, taking queries, user registrations, and tags!
First version of the database is also available for download.
First round of beta testing (with mostly “sandbox” tagging).
In summary, the first day of our open, human curated, classification of the neuroimaging literature!