The topics of change and innovation, specifically those related to technology, intrigue me. I read about concepts such as disruptive innovation, diffusion of innovation, and continuous improvement, and at this point I’m still trying to wrap my head around how they relate and when they can or should be applied in higher education. Frankly, I have more questions than answers, and so I continue to seek new knowledge and perspectives to make sense of it all.
I work in the technology field within higher education, where I have witnessed and implemented business processes enabled by technology since the mid-1990s. In the last few years, the pace at which technologies change seems to have grown even faster. Who would have imagined the growth and impact of social media, cloud, mobile, and big data just five years ago? In the last year or so, I started noticing more articles about wearable computing and the “Internet of Things.” The blurring of the line between computing services and products once available only through IT departments and those now readily available to consumers, also known as the “consumerization of IT,” has only become more pronounced in the last few years. These changes have provided opportunities and introduced new challenges. All these observations have led me to become more interested in trying to anticipate where the future of higher education and technology may be heading.
If change and innovation in higher education were only about technology, then maybe, just maybe, it would be easy. But change also involves culture, politics, traditions, paradigms, and personalities. Technological changes happen within the context of how higher education views itself, in terms of both its perceived roles (preparing students for careers, molding students into productive citizens through civic service, conducting research) and how it operates (shared governance, teaching methods, funding priorities, etc.). There is no consensus on these views. The role of faculty and traditional teaching methods are now being challenged in light of new learning opportunities available to students because of technology, including Massive Open Online Courses (MOOCs) and personal learning networks. Current technologies have also added a new spin to the old debate over how individuals learn (objectivism vs. constructivism).
Beyond philosophical debates about the role of technology in higher education, from a practical perspective it takes time and resources to introduce and implement new ways of using technology. It is a process, and the process involves human emotions. As someone who works in IT, my role is that of a service provider to my university’s communities of staff, faculty, and students. At the core of my responsibility is making sure the systems they use work properly, as they expect. Network outages and disruptions of applications and web services are what we try to avoid. Given that failures, trial and error, and not-so-perfect systems that disrupt services are all part of the process of introducing new systems, how do organizations balance the need to manage for stability with the need to provide room for transformational (and potentially disruptive) innovations? How do organizations gain buy-in from faculty, staff, students, and administrators to adopt new systems and new ways of doing things? Perhaps most importantly, how do we know when to apply incremental improvements versus introducing a radically new way of doing things and disrupting the system?
I’m hoping someone out there in higher education has figured out the answers to the questions I pose above, because I haven’t figured them out yet. If you have, or if you have some ideas, let’s talk.
image credit: http://www.innovation-post.com/what-is-the-difference-between-innovation-management-and-change-management/