I am sure this isn’t news to anyone, but it appears as though much of the web is in a constant state of beta. Think Gmail and its constant little reminder that this isn’t a real service … heck, it could go away at any moment. I use it, my wife lives in it, and lots of people at the University forward all of their mail to it. Does anyone care that it could disappear? I use Flickr for all my online photo storage and sharing … I do this even though I have a .Mac account that is not a “gamma” product like Flickr. I notice lots and lots of people spending time in there … the list just rolls on — you know, services we rely on while turning a blind eye to their potential lack of staying power.
In higher education we seem to have some very strict definitions of what a service means … in my higher education administrator’s mind a service is a fully supported tool with close to five nines reliability (99.999% uptime, or only about five minutes of downtime a year). The five nines thing is questionable in reality, but it plays well as a goal. I have to tell you, now that I am seeing what it takes to support user populations that number in the hundreds of thousands, I am much more careful about the words I use to describe things. Take the Podcasts at Penn State project … do we have a podcasting service here at the University? Sorta … our iTunes U implementation is in pilot, and so are all the supporting pieces to the podcasting stuff going on here. That means there really isn’t any true PSU HelpDesk support, no 24/7 server management, and there certainly aren’t any money-back guarantees that it’ll be a five nines environment. But at the end of the day, should that all matter? At the moment we plaster the word pilot all over the thing, but we have gone from about 35 faculty podcasting last semester to well over 100 this semester … adoption is happening simply because we have taken the plunge and created an opportunity.
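As a quick aside, here is a tiny back-of-the-envelope sketch (in Python, purely my own illustration, not tied to any particular PSU system) of what those availability targets actually translate to in downtime per year:

    # Back-of-the-envelope downtime budgets for common availability targets.
    # Pure arithmetic; not a claim about any specific service.
    MINUTES_PER_YEAR = 365 * 24 * 60

    for label, availability in [("three nines", 0.999),
                                ("four nines", 0.9999),
                                ("five nines", 0.99999)]:
        downtime = MINUTES_PER_YEAR * (1 - availability)
        print(f"{label} ({availability:.3%}): about {downtime:.0f} minutes of downtime per year")

The gap between roughly nine hours a year at three nines and roughly five minutes a year at five nines is exactly why five nines plays better as a goal than as a promise.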
With the growth of the beta mentality among Internet users, should we start a new classification of Internet tools? Maybe instead of talking about our services, we should talk about our emerging opportunities. So as we look to release the first bit of the Blogs at Penn State project, we could look at it as just an emerging opportunity for members of our community to engage in. Not a full service. Could we create a new classification of opportunities that aren’t judged by their total uptime or a promise that they will always be available — even over the long haul?
Can we create more agility if we work to establish a set of experimental opportunities that our communities can simply engage in while they are available? The good ones with high adoption rates would then get the attention they need to become services … if it works for Google Labs, why can’t it work in an environment where technical resources are spread very thin? I’m not talking about creating environments that would compromise end user security, privacy, or data … the building blocks to manage and protect identity are well established in our enterprise. I am simply asking if we could take a step back from the idea that everything has to scale to our total user base out of the box.
This is me just thinking out loud here … anyone have any thoughts on it all? With a change in mentality could we all offer a greater experience to our users? Or would we be cheating them by not making it all bulletproof? Just a set of questions that have been banging around in my head for a while now … thoughts?
I am always a little wary of porting business solutions (or metaphors) into education (higher or K-12). Here I think the problem is that if you move resources toward the popular service or solution, you potentially lose interesting work being done by the minority that find the less popular tools useful. There is no way in that system to measure, or take into account, tools that are extremely powerful for a small group (the long tail notion). This seems to me to run against the idea of what universities are designed to do — work that is interesting to a small group of folks that has the potential to make a large impact (or not). This is the foundation of the idea of basic science: it need not have applications or be popular, or for that matter even be understood by the general population, but it pushes the field forward. Is this any different with regard to technical resources? Do we really want them going to the most popular uses of technology on campus? What would that look like? What would it mean to be an innovator on a campus where the innovation must be immediately (or quickly) very popular to be fed resources?
The question I am really asking is how do we become more agile in higher education? If we don’t find ways to take risks with emerging trends, then we are dead in the water. If we don’t embrace a beta-style mentality, can we test out new approaches to old challenges? Or should we only focus on large University-wide projects like email, calendars, and back office systems? We are living in a new era of technology utilization for teaching and learning … how do we move the tail forward more quickly on our campuses?
I’m hearing two issues: the idea of perpetual beta, and the notion that a tech unit can create some level of structure for the management of exploration/emerging tools that evaluates those tools and moves selected ones along toward sustained support and integration (or your definition of a “service”).
I’ve always thought of the ongoing beta status of Gmail, Flickr and the like as the webapp form of versioning; instead of downloading updates or new versions, web-based tools fix/update without the necessity of distributing those updates. Thus, they don’t have to “announce” updates in the same way that client-based software needs to. In short, I don’t see “beta” as a clue to a possible lack of staying power. I see it as analogous to purchasing installed software X v2.0, understanding the implication that there will be an X v2.1 distributed at a later date. Maybe this is a fundamental misunderstanding of perpetual beta on my part.
The second issue is very interesting: how can an organization evaluate new technology in a planned, controlled way that recognizes the potential for some tools and moves those high-potential tools along toward a point where they integrate into the IT support structure (or “service”)?
At Educause 2006, some folks from Duke had a great presentation that outlined their phased approach to transitioning innovations from idea/experiment to full-blown service. They basically see three phases:
1. Experimentation: white papers, pilot projects, etc.
2. Extension & Transition: support for promising tech; identifying use cases
3. Standard Support & Integration: widely available & supported (helpdesk, etc.)
Their presentation files are available at: http://www.educause.edu/ir/library/pdf/EDU06190.pdf
They would seem to agree with your point about problems with scaling to the total user base straight out of the box; resources would most likely not allow for an approach like that. Some of these ideas are addressed in the innovation/change management literature, specifically work on “ambidextrous organizations” (organizations that simultaneously engage in ongoing/mature operations and innovation/exploration operations). What the literature seems to indicate is that organizations that include both ongoing service operations and longer-term innovation efforts are more successful in diffusing radical innovations. In other words: agile is good, but agile and stable in the same organization can pay dividends.
My sense from the presentation was that a structured approach helps by allowing for risk-taking with emerging tech while providing some measure of assurance that selected tools have been chosen for a reason and not simply on the basis of novelty. Early on, the faculty are in more of a partnership role, so “emerging opportunity” would probably be more accurate than “service” through the first two phases.
Interesting topic.