I was delighted to be able to go to IWMW 2013 this year. IWMW (the Institutional Web Managers’ Workshop) is an inspiring and entertaining three-day conference for institutional web folk (webmasters, web developers, web managers, web editors… whatever we’re calling ourselves in this job, this institution, this decade); it has been organised by UKOLN, so this may be its last year in its current format now that Jisc have dropped funding for UKOLN. Appropriately for this time of uncertainty, the conference theme was “What next?” — not just for IWMW but for web management, the web community, and HE in general.
I’m attempting to write up all the plenaries and both the parallel sessions which I attended. This will be a relatively neutral writeup, partly for my own record, partly for the benefit of colleagues or other interested people who couldn’t attend; I’m aiming to do some more personal opinion-based pieces on specific bits later.
Day 1 Plenary Sessions
Brian Kelly: Welcome to IWMW 2013
Brian gave us an overview of the current situation in the world of the institutional web: the continued financial challenges we all face (an unsurprisingly prevalent theme throughout the conference) and the changing technical environment.
He also gave us a whistle-stop tour of the history of IWMW, from the first meeting at KCL in 1997 (before the name ‘IWMW’ was coined) through to 2013; it was interesting to see how the buzzwords and focus shifted: web strategy, e-business, web 2.0, APIs, video streaming… then from about 2009 onwards there were constant rumblings of “change”, “managing change”, “uncertainty” and similar concerns — and for UKOLN, of course, all that anxious uncertainty resolved into an awful reality earlier this year when Jisc cut the organisation’s funding and decimated its staff.
With so much doubt about the future, I thought it’d be hard to begin the conference on an optimistic note; but as always Brian’s enthusiasm for the institutional web and the IWMW community was infectious, and without further ado we launched into the plenary sessions…
P1: Cable Green: Open Education: the business and policy case for OER
Cable Green explained that open educational resources (OER) let us “take advantage of the technical and legal tools of the day to make sure everybody on the planet has access to education”. This balance of the practical and the philosophical sums up the tone of his talk; he moved effortlessly between lucid explanations of the ins and outs of Creative Commons licences and passionate advocacy for the social benefits of freely and openly available educational resources.
However, as he clearly showed, there are tensions between technology, morality and legality; the technology means that “we are now in a read/write world”, and educators and publishers are still adapting to that change. We now have the ability to share resources at the speed of light, but the law prevents this; Green pointed out that copyright can be a positive thing, but it hasn’t kept pace with the available technology.
Green recalled his own OER journey, from the first time he published his own course materials on the web, inviting people to use them (“it’s free!”) without realising that the institutional copyright statement at the bottom stymied his attempt to give his work away. He discovered that “to get your work into the public domain, you have to die! and then wait 70 years!” — but as educators, we want to share now, not 70 years after our death.
He gave a clear explanation of the Creative Commons framework (“the backbone of OER”), the way the licences form a sliding scale of “how free” something is, the importance of open licensing for internationalisation and accessibility (translating and creating accessible alternatives is often prevented by more restrictive copyright) and for customisation and affordability of educational resources (allowing you to modify, modularise, and take only the bits you need).
If the marginal cost of producing and distributing digital resources is effectively zero (he illustrated this with figures comparing the cost per copy of hand-copying a book, traditional printing, print on demand, and digital copying) then, argued Green, educators have a moral responsibility to share. “Publicly funded resources should be openly licensed resources”, and, as Winston Churchill put it, “If you have knowledge, let others light their candles with it”.
His final soundbite or ‘thought for the session’ was this: “the opposite of ‘open’ isn’t ‘closed’. The opposite of ‘open’ is ‘broken’.”
P2: Doug Belshaw: Mozilla, Open Badges and a Learning Standard for Web Literacy
Doug Belshaw gave an overview of the Mozilla Open Badges infrastructure and how it underpins the new learning standard for web literacy which is being developed.
Open badges, he explained, are essentially just “images with metadata hard-coded into them” (not unlike Creative Commons licences as explained in the previous session). They’re a “portable credential” which can be embedded in digital content, and they “can accommodate formal and informal learning pathways”, capturing learning wherever and however it occurs. This simple but powerful infrastructure allows any organisation to issue its own badges, and lets users bring their badges together into a single ‘backpack’ or portfolio — breaking through the “silos of accreditation” which currently constrain our qualifications.
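To make the “images with metadata” idea concrete, here is a minimal sketch of the metadata behind a badge, with field names following the Open Badges 1.0 assertion specification as I understand it; the URLs, salt, and recipient address are all invented for illustration:

```python
# Sketch of an Open Badges assertion: the metadata 'baked' into the
# badge image. Field names follow the Open Badges 1.0 assertion spec;
# all URLs, the salt, and the recipient here are hypothetical.
import hashlib

def hash_recipient(email, salt):
    """Hash the recipient's email (as the spec allows) so the badge
    can be published without exposing their address."""
    return "sha256$" + hashlib.sha256((email + salt).encode()).hexdigest()

assertion = {
    "uid": "abc-123",
    "recipient": {
        "type": "email",
        "hashed": True,
        "salt": "deadsea",
        "identity": hash_recipient("learner@example.ac.uk", "deadsea"),
    },
    # URL of the BadgeClass describing what was achieved and by whose criteria
    "badge": "https://example.org/badges/web-literacy.json",
    # Anyone can verify the badge by fetching this hosted assertion
    "verify": {
        "type": "hosted",
        "url": "https://example.org/assertions/abc-123.json",
    },
    "issuedOn": 1372636800,  # Unix timestamp
}
```

Because the assertion points back to a badge class hosted by the issuer, any organisation can issue badges simply by publishing these files, which is what makes the infrastructure open.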
Of course, open badges can represent any qualification, accreditation, achievement or statement about a person’s experience, from a university degree to the example ‘Open Badges 101’ badge you can claim from the Open Badges site — or to badges which aim to change behaviour, such as a badge for releasing resources under a CC licence. The value of an open badge comes from how rigorous the criteria are.
Belshaw then moved on from general issues of qualifications and competencies to the specifics of the open learning standard for web literacy that is currently being developed. Mozilla are working on the ‘skills layer’ now with the web community: this is the time to get involved.
P3: Kyriaki Anagnostopolou: Et tu MOOC? Massive Online Considerations
MOOCs are looming large in the HE landscape at the moment; it’s not clear if they are ‘the answer’ (and if so, to what?) but Anagnostopolou’s talk gave us plenty of questions.
She started with some useful facts and figures about: levels of participation in and completion of courses (with the caveat that these ‘traditional’ measures of success may not apply); the costs of running MOOCs (where estimates vary so wildly that direct comparisons are often impossible); the way institutions are currently funding them (usually through marketing or ‘widening participation’ budgets); and the ‘openness’ of MOOCs (the data often isn’t ‘open’).
Then on to the questions, a mixture of practical and philosophical considerations (I’ve only included a selection):
- Is a MOOC the new textbook? Should we consider integrating MOOCs from other institutions into our teaching?
- Are we as institutions prepared to be judged based on the online learning experiences that we offer? They’re not necessarily representative of our campus-based courses…
- As a tutor/facilitator, how do you deal with a 1:8000 staff-student ratio? How do you make your presence felt and make students feel supported in the massive global classroom?
- Should MOOCs count for accreditation?
- Will MOOCs change the expectations of traditional campus-based students?
She also raised the questions of how ‘learning analytics’ will work as a new research area, what the business model of MOOCs might turn out to be, and the broader consideration of what education is actually about: is it simply transmission of content or a more holistic experience? Lots of food for thought!
P4: Amber Thomas: Turning our attention to supporting research
Amber Thomas talked about how the landscape of research is changing and how we as digital experts could do a lot more to support researchers (especially early career researchers), as well as giving specific examples from her own department at Warwick.
The funding and evaluation of research has changed: many funders now insist on Open Access publication; a new focus on impact is changing the notion of where research happens and who the end users are; and research data is now a more prominent part of the research process.
Lots of the signs of change are already familiar to us: academic blogging, open lab notebooks, collaborative texts, crowdsourcing, citizen science, open access research papers, public datasets… and they’re all pointing to a more participatory and public scholarly discourse. Public engagement doesn’t just mean putting up information on the institutional website — it can be “the long tail of scholarship”, making research more accessible to all.
The changed research landscape is also more collaborative and interdisciplinary — and there’s a danger that this doesn’t fit well into the fixed web structures we maintain. Research these days is more social: and the social is happening outside our institutional websites. Research is also broadening in terms of what ‘counts’ (no longer just traditional publications but research datasets, code, blogs, slidedecks, podcasts, videos…). More collaboration and interdisciplinarity means that research is happening between specialisms, across departments, across borders, outside the university.
Out of this diversification came altmetrics, the movement towards new ‘social web’ metrics for analysing and informing research and its impact. Within this movement “some fundamental questions are being asked that could change how we manage and evaluate research within universities”. Research footprint monitoring provides a way to collate usage data “from where your research lives and breathes”, and feed this back onto institutional web pages through APIs, feeds, and widgets. (Examples of specialist aggregators of analytics from research outputs include Altmetric, ImpactStory, PlumX.)
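The re-aggregation step is conceptually simple: pull per-source counts from wherever the research lives and merge them into one summary for display on an institutional page. A minimal sketch, with the source names and numbers invented for illustration (real figures would come from the aggregators’ own APIs):

```python
# Sketch of re-aggregating distributed usage metrics for one research
# output. All source names and counts below are invented; in practice
# each dict would be fetched from a service such as Altmetric,
# ImpactStory, or PlumX via its API or feed.
from collections import Counter

def aggregate_metrics(*sources):
    """Merge per-source metric counts into a single summary."""
    total = Counter()
    for source in sources:
        total.update(source)  # sums counts for matching metric names
    return dict(total)

blog_mentions = {"mentions": 4, "comments": 12}
twitter = {"mentions": 37}
repository = {"downloads": 210, "views": 1500}

summary = aggregate_metrics(blog_mentions, twitter, repository)
# summary["mentions"] == 41 (4 blog mentions + 37 tweets)
```

The interesting work, of course, is not the merging but deciding which sources to trust and how to present the combined figures responsibly.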
Thomas then talked about her work in the new Digital Humanities department at Warwick: the themes and technologies which are emerging in research support (CMSs/databases, visualisation tools, social media, impact and analytics); and the importance of the one-to-one conversation between digital technologist and researcher in the requirements gathering process — and how to maintain that relationship throughout the life of a project. She concluded by saying that the implications for the institutional web are that we really have to get good at the following:
- re-aggregating distributed content analytics
- using third party specialist platforms (and related risk management)
- using data and databases, throughout the research lifecycle
- preservation and archiving
- being technology collaborators in complex projects
- responsive innovation through to service provision
and finally, admitting that we don’t know the answers sometimes!