hosted by EBSCO; summary at Eventbrite. Disclaimers: there was a free lunch; I love open access; I’m appropriately suspicious of vendors and vapourware; and (I didn’t think this would be relevant before attending, but…) I like zebras.

FOLIO is “a community collaboration to develop an open source Library Services Platform (LSP) designed for innovation”.

Introduction from EBSCO

  • Vendors – EBSCO, ByWater, SirsiDynix
  • ‘Open’ orgs – Koha, Index Data, Open Library Environment
  • Universities – Cornell, University of Sydney, Aberdeen, Glasgow, Newcastle, Università di Roma, National Széchényi


  • Will support ILS functions but broader – a ‘library services platform’ [à la Alma etc]
  • Each function as its own app – so can create completely new apps [eg data mining, IR integration, learning management, research data, predictive analytics, grant management]


  • Apps from around the world built by commercial vendors who may charge, and by libraries who probably won’t. Can buy professional services.

Introduction from Peter Murray (open source community advocate for Index Data)
“an open source Library Services Platform built to support ILS functions and to encourage community innovation”


  • a platform intended for people to build on – a healthy platform depends on how much people contribute to it, which depends on the platform making this easy
  • made up of services
  • geared towards libraries – patrons, bibliographic records, authority records


  • create community where libraries can come together to innovate
  • leverage open source to reduce the “free as in kittens” costs
  • improve products by involving libraries more in development
  • bring more choice to libraries – eg multiple circulation apps you can switch between if one doesn’t suit; replace the fines app with a demerits app

Technical stuff:

  • “APIs all the way down”; inspired by microservices so can interface with the core through standard HTTP/REST, JSON/XML, etc; cloud-ready: scalable, ready for deployment on cloud but not bound to a particular vendor. Building with AWS as reference but could be run on Azure, on private VMware, etc.
  • Middleware inspired by the API Gateway pattern. (Core Okapi [this is where the zebras come in: the okapi is in the zebra family] is mostly complete, developers starting to work on functionality.)
  • Multi-tenant capability built-in
  • Vert.x; RESTful style, JSON for data format; request/response pipelines eg first request routed to authentication module then sent to next module; Event Bus that can be exposed with various protocols (eg STOMP, AMQP)
  • Dynamic binding – dependencies are interfaces, not implementations – allows you to replace circ module with another one that respects the same interface
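The gateway/pipeline/dynamic-binding ideas above can be sketched in a few lines. This is an illustrative toy in Python, not actual FOLIO code (Okapi itself is Java/Vert.x), and the module names and interfaces are invented:

```python
# Toy sketch of the Okapi-style gateway pattern: requests flow through an
# ordered pipeline of modules (eg auth first), and modules are bound by
# interface name, so any implementation honouring the interface can be
# swapped in. All names here are invented for illustration.

class Gateway:
    def __init__(self):
        self.modules = {}    # interface name -> implementation
        self.pipeline = []   # ordered interfaces each request passes through

    def register(self, interface, handler):
        # dynamic binding: depend on the interface, not the implementation
        self.modules[interface] = handler

    def set_pipeline(self, *interfaces):
        self.pipeline = list(interfaces)

    def handle(self, request):
        # request/response pipeline: each module sees the request in turn
        for interface in self.pipeline:
            request = self.modules[interface](request)
        return request

def auth_module(request):
    if request.get("token") != "secret":
        raise PermissionError("not authenticated")
    return request

def circ_module(request):
    request["response"] = f"checked out item {request['item_id']}"
    return request

gw = Gateway()
gw.register("auth", auth_module)
# replaceable by any other module that respects the same interface:
gw.register("circulation", circ_module)
gw.set_pipeline("auth", "circulation")

result = gw.handle({"token": "secret", "item_id": "b123"})
```

Swapping the circulation module for a different one (say, a demerits-based variant) is just another `register("circulation", …)` call, which is the point of binding to interfaces rather than implementations.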


  • self-contained http services (programming-language agnostic) – small, fast, do one thing very well
  • Okapi gateway requirements – hooks for lifecycle management, strong REST/JSON preference (some libraries hosting hackathons with their comp.sci. department students)
  • might be grouped into applications (with dependencies) eg cataloguing, circulation
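As a concrete (and entirely hypothetical) picture of a "small, self-contained HTTP service", here's a one-endpoint item-lookup module in plain Python, JSON in/out, no framework. FOLIO modules are language-agnostic, so this is just a sketch of the shape, with an invented endpoint and data:

```python
# Minimal self-contained HTTP service: one endpoint, JSON responses.
# Endpoint path and record fields are invented for illustration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

ITEMS = {"b123": {"title": "The Okapi Atlas", "status": "available"}}

def lookup(item_id):
    """Pure lookup used by the handler: returns (status_code, json_body)."""
    item = ITEMS.get(item_id)
    if item is None:
        return 404, json.dumps({"error": "not found"})
    return 200, json.dumps(item)

class ItemService(BaseHTTPRequestHandler):
    def do_GET(self):
        # eg GET /items/b123 -> JSON record, or 404 if unknown
        status, body = lookup(self.path.rsplit("/", 1)[-1])
        payload = body.encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep the demo quiet

def serve(port=8081):
    """Run the service; in the FOLIO picture, Okapi would route to this."""
    HTTPServer(("127.0.0.1", port), ItemService).serve_forever()
```

The "do one thing very well" idea is visible in how little is here: one resource, one verb, and the gateway (not the module) worries about auth, tenants and routing.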


  • Stripes – a user interface toolkit to let you quickly build the UIs you need to speak to the backend

Metadata – the FOLIO Codex

  • Takes concepts from FRBR (work, instance, holdings).
  • Format-agnostic (MARC, MODS, DC, whatever): core metadata “enough for other modules to understand”; native metadata “for apps that understand it” (eg circ module needs a title but doesn’t care about all the MARC subfields or alternate titles or…)
  • Original format gets derived into FOLIO Codex (with work, instance, holdings) which gets used in modules. Current debate in the community about whether the original format should also be part of the codex.
  • Support multiple bib utilities and knowledge bases. Maintain list of local changes. Automated and semi-automated processes for updating local records with changes from source. “Cataloguing by reference”.
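The core-vs-native split might look something like this. A hedged sketch only: the field mappings and record shapes below are my guesses at the idea, not FOLIO's actual Codex schema:

```python
# Sketch of deriving a Codex-style record: a small "core" any module can
# understand, plus the untouched "native" record for apps that know the
# original format. Field names and mappings are invented for illustration.

def to_codex(native_record, fmt):
    """Derive a core record from a native-format record."""
    if fmt == "marc":
        title = native_record.get("245", {}).get("a", "")
        identifier = native_record.get("020", {}).get("a", "")
    elif fmt == "dc":
        title = native_record.get("dc:title", "")
        identifier = native_record.get("dc:identifier", "")
    else:
        raise ValueError(f"unknown format: {fmt}")
    return {
        "core": {"title": title, "identifier": identifier},   # enough for eg circ
        "native": {"format": fmt, "record": native_record},   # full detail kept
    }

marc = {
    "245": {"a": "The Okapi Atlas"},
    "020": {"a": "978-0-00-000000-0"},
    "246": {"a": "An alternate title circ does not care about"},
}
codex = to_codex(marc, "marc")
```

Note the circ-style consumer only ever reads `codex["core"]`, while a full cataloguing app can still reach the complete MARC via `codex["native"]` – which is roughly the current community debate about whether the original format belongs inside the codex at all.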

Timeline: Aug 2016 opened GitHub repositories; Sept 2016 Open Library Foundation created to hold IP, licensed Apache; phase 1 Aug 2016–2018 (availability of FOLIO apps to run a library, ie ILS functions) followed by extended apps.
Project plan:

  • 2016 built gateway, sample app, UI toolkit, but also SIGs
  • Jan-Mar 2017 built circ, resource management, user & rights management, but also documenting
  • Apr-Jun 2017 acquisitions, system ops, knowledgebase, and onboarding dev teams
  • Jul-Dec 2017 apps marketplace and certification, discovery integration
  • 2018 vendor services and hosting, implementation, migration, data conversion, support


Community engagement
Lots happening on Slack channels, many meetups

Governance / lazy consensus
Open Library Foundation > Folio > Folio product council > SIGs > Development

OLF – a 501(c)(3) (getting this status took a lot of time, as they had to prove EBSCO resources it but doesn’t control it) – mission to help libraries develop open stuff to support libraries, research, learning and teaching. Board inc Texas A&M, Duke, California Inst of Tech, EBSCO, JISC, CALIS (China).

Dev cycle:
SIGS >(Design process)> Design Teams >(Requirements process)> Analytics Teams >(Development process)> Dev Teams >(Review & feedback process)> SIGs

SIGs currently on topics like metadata management, resource access, user management, internationalisation

When OLE got libraries to map requirements, they got 6000+; went back and said this needed cutting down, and the libraries came back with only 3000+. The FOLIO project has processes to identify which ones are needed by July 2018.

Dev team – anyone can join in (biweekly check-ins, open tools with wiki, forums, Slack, GitHub) but it takes time/effort to really enjoy and contribute

Lots of other companies build something then demo and ask for feedback – by which time it’s too late to provide really meaningful feedback. FOLIO is getting the feedback during/before the dev process.

This was demo’d on a working FOLIO instance. UI still very(!) sketchy, but nav bar along the top with apps, eg users, items, scan. Demo’d searching/filtering users; switching to items and back, the search results still display; search for an item to copy its barcode; switch to scan, look up the user, paste in the barcode, click ‘checkout’; switch back to users and see the user now has the book borrowed; switch to items and see the item now checked out.
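Under the hood, that demo flow is presumably a sequence of REST calls from the UI to the backend modules. Here's a runnable sketch of the sequence with the HTTP layer faked out – the endpoints and payloads are my guesses, not FOLIO's actual API:

```python
# The checkout demo as a sequence of (faked) REST calls: look up user,
# look up item, POST a checkout, re-read state. Routes and record shapes
# are invented for illustration.

USERS = {"u1": {"name": "Pat", "loans": []}}
ITEMS = {"b123": {"title": "The Okapi Atlas", "status": "available"}}

def api(method, path, body=None):
    # stand-in for HTTP calls that would go through the Okapi gateway
    if method == "GET" and path.startswith("/users/"):
        return USERS[path.split("/")[-1]]
    if method == "GET" and path.startswith("/items/"):
        return ITEMS[path.split("/")[-1]]
    if method == "POST" and path == "/circulation/checkout":
        ITEMS[body["item_id"]]["status"] = "checked out"
        USERS[body["user_id"]]["loans"].append(body["item_id"])
        return {"ok": True}
    raise ValueError(f"no route for {method} {path}")

# the demo steps, in order
user = api("GET", "/users/u1")                      # look up the user
item = api("GET", "/items/b123")                    # find item, copy barcode
api("POST", "/circulation/checkout",
    {"user_id": "u1", "item_id": "b123"})           # scan app: checkout
user_after = api("GET", "/users/u1")                # users app shows the loan
item_after = api("GET", "/items/b123")              # items app shows checked out
```

The interesting part of the demo was exactly what the last two reads show: the separate users and items apps immediately reflect state changed by the scan app, because they all talk to the same backend services.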

[My current thoughts: this is clearly not production-ready at present, and even assuming everything stays on track for the rest of phase 1 I wouldn’t consider implementing it in 2018 – but I think it’s worth keeping an eye on. And the open nature of the development makes keeping an eye on its progress easy.

One risk I see in the architecture is that it’d be quite possible for every library to be running a different set of modules which may complicate community troubleshooting. This is by design and also a strength (so public libraries don’t get forced into an academic mode of thinking, or vice versa, or both get forced into some terrible compromise), and the requirement that everything be built around core APIs and data structures probably mitigates much of the mess it could otherwise turn into.

Relatedly, a proliferation of similar but subtly different modules, each used by only a few libraries, could also be a problem. At the moment, for example, the data fields in the user module are fixed: if you wanted to add eg a preferred language for communications, you’d have to create an entirely new module. But it sounds like there’ll be some work in future to allow a certain amount of customisation so you could still use the same basic module.

I also see a risk in the marketplace potentially getting full of pay-for modules. Hopefully it gets populated with enough free modules to start with to keep things on an even keel – or even tilted towards open as vendors find limited demand for a pay-for module when there are so many free competitors. I could see a freemium model develop… The fact that there are so many libraries and open-friendly organisations involved from the start is promising.]
