The topics of change and innovation, specifically technology-related change and innovation, intrigue me. I’ve been reading about disruptive innovation, diffusion of innovations, and continual improvement processes. At this point, I’m still trying to wrap my head around how these concepts relate and when they can or should be applied in higher education. I have more questions than answers, so I’m seeking new knowledge and perspectives to make sense of it all.
I work in the technology field within higher education, where I’ve witnessed and implemented technology-enabled business processes since the mid-1990s. In the last few years, the pace of technological change has become even faster. Who would have imagined the growth and impact of social media, cloud, mobile, and big data just five years ago? Last year, I started noticing more articles about wearable computing and the “internet of things.” The blurring of the lines between computing services and products once available only through IT departments and those readily available to consumers, also known as the “consumerization of IT,” has only become more pronounced in the last few years. These changes have provided opportunities and introduced new challenges. All these observations have made me more interested in anticipating where the future of higher education and technology may be heading.
If change and innovation in higher education were only about technology, they might be easy; but change also involves culture, politics, traditions, paradigms, and personalities. Technological change happens within higher education’s view of itself: its perceived roles (preparing students for careers, providing civic service by molding students into productive citizens, conducting research) and how it operates (shared governance, teaching methods, funding priorities, etc.). There is no consensus on these views. The role of faculty and traditional teaching methods are now being challenged in light of new learning opportunities that technology provides to students, including Massive Open Online Courses (MOOCs) and personal learning networks. Current technologies have also added a new spin to the old debate about how individuals learn (objectivism vs. constructivism).
Beyond philosophical debates about the role of technology in higher education, from a practical perspective it takes time and resources to introduce and implement new ways of using technology. It’s a process, and the process involves human emotions. As someone who works in IT, my role is that of a service provider to my university’s communities of staff, faculty, and students. At the core of my responsibility is ensuring that the systems they use work reliably, as they expect. Network outages and disruptions to applications and web services are exactly what we try to avoid.
Given that failures, trial and error, and not-so-perfect systems that lead to disruptions of service are all part of the process of introducing new systems, how do organizations balance the need to manage for stability with the need to provide room for transformational (and potentially disruptive) innovations? How do organizations gain buy-in from faculty, staff, students, and administrators to adopt new systems and ways of doing things? More importantly, when and how do we know whether to apply incremental improvements or to introduce a radically new way of doing things and disrupt the system?
I’m hoping someone in higher education has figured out the answers to the questions I pose above, because I have yet to figure them out myself. Let’s talk if you have, or if you have some ideas.
image credit: http://www.innovation-post.com/what-is-the-difference-between-innovation-management-and-change-management/