Since first talking about this I’ve been pondering what topics would make good candidates for trying out the model. I think it should be something that:
- is of interest to as many people as possible;
- can be contributed to by as many people as possible; and
- can be contributed to as easily as possible.
With these criteria in mind I’ve come up with two possible ideas:
A. Trends in patrons’ use of electronic equipment in the library
This is basically an extension of the article that inspired my thinky thoughts in the first place, whose authors did headcounts to measure laptop use in their library. We could extend this to, say, a headcount of:
- total people, of course;
- users of library computers;
- users of personal laptops;
- users of PDAs;
- users of cellphones;
- and a handy ‘other’ category.
We could decide what time(s)/day(s) to run the headcount on, set up an online spreadsheet, and anyone wanting to participate could do their headcount and enter the data into the spreadsheet. People could participate once or recurrently; there’d be value either way. It’s simple and quantitative and easy.
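To give a feel for how simple the analysis could be, here’s a rough sketch of tallying pooled headcounts. The column names and numbers are purely illustrative, not a settled schema:

```python
# A hypothetical sketch: combining headcount rows from a shared spreadsheet.
# Each row is one headcount a participant entered; field names are made up.
from collections import Counter

rows = [
    {"library": "A", "total": 40, "library_pcs": 12, "laptops": 9,
     "pdas": 1, "cellphones": 5, "other": 0},
    {"library": "B", "total": 25, "library_pcs": 8, "laptops": 4,
     "pdas": 0, "cellphones": 3, "other": 1},
]

# Sum every numeric column across all submitted headcounts.
totals = Counter()
for row in rows:
    for field, count in row.items():
        if field != "library":
            totals[field] += count

print(dict(totals))
```

Even with recurring participation, anything that exports to CSV would be enough to produce simple trend lines per library or per time slot.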
B. Librarians’ perceptions of the quality of vendor training
(ie training provided by vendors to librarians in the use of their products, in case that’s not clear)
This is, perhaps, a delicate topic. I’ve been thinking for a while about blogging about my own perceptions, all aggregated and anonymised, but it still feels a bit “bite the hand that holds all our resources”, because my perceptions are not good. Perhaps it would be less awkward if it came from a whole lot of librarians. And vendors are responding more and more to concerns raised on social media, so maybe it would actually get some attention and help vendors provide better training.
OTOH this would be an inherently messy topic to research. It’d be a good test of whether crowdsourcing a qualitative research topic could work, but perhaps not a good test of whether crowdsourcing research per se is workable. There’d need to be a lot of discussion about what exactly we want to research:
- Likert-scale ratings of, eg, the amount of new info, the amount of info already known, the trainer’s familiarity with the database, the trainer’s ability to answer questions…?
- more freeform answers about problems with presentations, eg slides full of essays, or trainers bungling example searches…?
- surveying trainers themselves to find out what kind of training they get in how to give a good presentation?
So, for anyone interested in going somewhere with this — or just interested in reading the results — what do you think? Topic A, topic B, topic C (insert your own topic here), or all of the above?