
Importing Multiple Access Databases in R

Having trouble importing your Microsoft Access databases into R? Yeah, I had trouble “accessing” Access in R as well. After pouring 7/10ths of my soul into a project, it looked like I might have to analyze a dataset (over 100 Access databases) in Excel, or worse, by hand. Luckily R, as always, came to the rescue. The trick you’ll need is a library called “RODBC”, a package that will allow Access’s notoriously stubborn ports to open wide for R to come in and grab information.
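In its simplest form, the RODBC workflow looks something like the sketch below. The file path and table name are just placeholders, so swap in your own.

###A minimal sketch of the basic RODBC workflow (placeholder path and table name).
###install.packages("RODBC") ###run this once if you don't already have the package
library(RODBC)
connection <- odbcConnectAccess2007("C:\\Path\\To\\YourDatabase.accdb")
sqlTables(connection, tableType = "TABLE")$TABLE_NAME ###see which tables the file contains
trial <- sqlFetch(connection, "YourTableName") ###pull one table into a data frame
odbcClose(connection)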

Below is a sample of the code I’ve been using. Not all of my conventions are the best, and not all of them will work for you, so I’ll explain what some of the more specific variables do.

NOTE: if you use 32-bit Microsoft Office, you will need to use the 32-bit version of R. I know that seems stupid, but those are the rules!
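If you’re not sure which build of R you’re running, a quick check from the console will tell you:

###Quick sketch: check whether your R session is 32-bit or 64-bit.
R.version$arch ###"i386" means 32-bit, "x86_64" means 64-bit
.Machine$sizeof.pointer ###4 means 32-bit, 8 means 64-bit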

###Begin Script
###This particular build is for one of our lab's projects, but you can edit however you like.
###This assumes that you store your results as "SubjectName001, SubjectName002, OtherSubject001, OtherSubject002".
###Before running the loop, you will need to change how many sessions you want to run (HowManySessions), which session the loop starts on (StartSession),
###and the pigeon's name (PigeonName). Make sure the Directory points to the folder where all of your RESULTS files are stored. No need to point to individual sessions!
library(RODBC)
Directory <- "C:\\Users\\Alexander\\Documents\\Projects\\Stimulus Movement\\Results\\"
FileExtension <- ".accdb"
HowManySessions <- 30
PigeonName <- "Raphael"
StartSession <- 4501 ###Highlight from here down, and press CTRL+R to start the loop!
for (i in 1:HowManySessions){
  SessionString <- paste(PigeonName, StartSession, sep="")
  db <- paste(Directory, SessionString, FileExtension, sep="")
  connection <- odbcConnectAccess2007(db)
  sqlTables(connection, tableType = "TABLE")$TABLE_NAME ###optional: lists the table names in this database
  trial <- sqlFetch(connection, "TrialResults")

  ###Here's an example of pulling subsets of data out of the database.
  ###Calculating accuracy for each of these two groups of trials, and then attaching it to a data frame called "numbers" below.
  movetrial <- trial[trial$NoTouch==1,]
  stattrial <- trial[trial$NoTouch==0,]
  moveACC <- 1 - mean(movetrial$IncorrectCorrections)
  statACC <- 1 - mean(stattrial$IncorrectCorrections)
  numbers <- data.frame(PigeonName, StartSession, moveACC, statACC)

  ###Now, we export it to a comma-separated spreadsheet. Note that column names are turned off in this example!
  write.table(numbers,
              file='C:\\Users\\Alexander\\Documents\\Projects\\Stimulus Movement\\Results\\omnibus.csv',
              row.names=F, col.names=F, append=T, sep=",")

  close(connection)
  StartSession <- StartSession + 1
}
###You can use this command to close all ODBC connections when you're done with your loop.
###odbcCloseAll()
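If your databases don’t follow a tidy name-plus-session-number pattern, another option (just a sketch along the same lines, not the script I actually ran) is to let R find every .accdb file in the Results folder and loop over whatever it finds:

###Alternative sketch: loop over every .accdb file in the Results folder instead of building file names by hand.
AllFiles <- list.files(Directory, pattern="\\.accdb$", full.names=TRUE)
for (db in AllFiles){
  connection <- odbcConnectAccess2007(db)
  trial <- sqlFetch(connection, "TrialResults")
  ###...calculate and export whatever you need here, just like in the loop above...
  close(connection)
}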

Scientific Writing and the Illusion of Objectivity

The past few months, my colleagues and I have been working on a manuscript.  We plan to submit it to a journal where it will be, like all other articles, peer-reviewed, commented on, and (hopefully) published.  This is how it works, and it’s how it has almost always worked.

As first author, it has been my job to write up the majority of the manuscript.  Our field demands APA Style formatting.  This style claims to be the most neutral of all formatting styles on the market, and it strongly recommends against using anything that could bias the language, such as first-person accounts, descriptive adjectives, or anecdotes.  I get it: this is science, and science should be objective.  Therefore, we ought to write about it in the most objective way possible.  But how objective is the most objective way possible?

Although APA Style, and other forms of scientific writing, aim to be objective, I’m not so sure such a thing can be achieved.  There are a variety of reasons for this, and primary among them is that writing is a naturally subjective experience.  Writing is just a fancy way of committing introspection to paper, and introspection was more-or-less struck down in the early 20th century for being too subjective.  Introspection, as Edward Titchener used it, was a way for a person to describe their internal machinations.  This line of research fell by the wayside when John B. Watson hit the scene, and he would go on to make some pretty convincing arguments against introspection as a tool of objectivity.  Writing, as it’s normally used, is just a way for people to record their inner monologue, which is influenced by the environment, affect, mood, and personality.

I’d say that scientific writing is about as objective as anything you might find in a history textbook.  Both historical and scientific writing are encouraged to carry no bias, be accurate, etc… but both of these media are reports of past events.  So not only are these past events available for the author’s interpretation (as objective and neutral as she/he may try to be), but they depend on documentation.  Scientists try to record everything in real time — if they manipulate a variable, they write it down.  But what if it’s not written down?  Historians want to report from primary sources — but what if there is no primary source?  These examples lead to, at best, a modest reconstruction of the past based on our best available knowledge, and at worst, guesswork.

We can’t just submit numbers and equations to a journal for publication, which, in my mind, would be the only real way to be purely objective.  Authors are required to explain the data.  It’s in this explanation that things get hairy.  It would be one thing if scientists simply reported the basic relationships between numbers (i.e., this group’s mean was larger than that group’s mean), but we’re asked to go beyond even that.  We have to ascribe a “why” to any data trend (or lack thereof).  Regardless of your thoughts on the nature of causality, it’s a pretty dangerous affair, especially if we are counting on scientific writing to be completely objective.  Scientists are going to describe data in a way that makes sense to them, and that often involves their background and history with the field.  I’m going through this now with a co-author, who has a different interpretation of our data set.  The mere fact that we have arrived at different conclusions ought to mean that we shouldn’t include any interpretation of the data in our report, if we are aiming to be objective.  After all, how objective is it if two people, who work in the same field, report the same data set differently (if only slightly)?

I don’t want to get too involved in a discussion about funding.  For the most part, scientists are going to do what’s best for science.  But every scientist feels the need to #1, justify their existence, and #2, justify their funding.  It’s not the scientists’ fault, either; journals often push their authors to draw links between their research and applied areas.  There’s a push to move away from “pure science” or “science for the sake of science” by drawing parallels between Important Issues and current studies.  This pushes, and will continue to push, scientific writing away from the realm of objectivity and toward needless speculation.  And that’s without mentioning how it affects the way data are interpreted.

Journals like PLoS ONE (Public Library of Science) are now requiring that authors submit their complete data sets whenever they have an experiment published.  This is a move in the right direction, at least in light of objectivity.  Not only does it lend an increased degree of accountability, but it lets the audience look at the unbiased, agenda-less numbers for themselves.  Scientific writing seems to be moving in two completely opposite directions, both towards and away from subjectivity.  Maybe one day, we’ll feed a computer all of our experiments’ specifications, give it the data, and it will automatically generate an article about the study.  There are ways to remove the human factor from scientific writing, but journals (or magazines) like Science, Nature, or Psychological Science seem less than interested in moving in that direction.


Reviews Now at Earbuddy.net

My activity here has dropped off since September.  You can find most of my writing at Earbuddy.net.  It’s a good site, and you should check it out.

I’ll continue writing for I Might Be Wrong, but future posts will be less review-oriented.  I suspect I’ll share more of my thoughts on science, philosophy, and psychology here, now that I have somewhere else to air out my “criticism”.