ChatGPT, AI, and Higher Education

The following is a list of personal blog posts focusing on the applications and implications of ChatGPT in higher education. The topics covered range from the potential benefits and applications of ChatGPT in student learning and teaching to the risks and challenges of implementing it in campus cybersecurity and student conduct. Other topics include the impact of ChatGPT on higher education, its coding functionality, and its use in social justice and Filipino-American history education. Some posts also explore the future of higher education and the role of ChatGPT in it, and its potential to serve stakeholders through the Jobs-To-Be-Done theory.

Collection of higher education and ChatGPT resources, events, and articles.

==================================================================

Resources

Trainings

Presentations

ChatGPT’s General Use and Implications in Higher Education

ChatGPT as Training/Quiz Generator

ChatGPT as a Learning Tool

ChatGPT and Cybersecurity

ChatGPT for Workplace Efficiency

“Social Imagination” as AI Framework

How can we discuss AI decisions in a way that centers the human experience, weighs societal impacts, and considers the future implications of our actions?

This blog post introduces the concept of Sociological Imagination as a framework for technologists, policymakers, affected communities, and leaders to take a holistic and balanced approach to Generative AI discussions and decisions. 

The concept of sociological imagination, which I interpret as the intersection of one’s identity with society and history, was coined by C. Wright Mills in 1959. He defined it as the ability to see the relationship between individual experiences and larger social forces. Dr. John Cruz introduced me to the concept in a UCSB Filipino-American History course when I was an undergraduate in 1992. It has shaped how I examine the intersectionality of my identities and my leadership approach, and it provides context for my decisions and their impact, not just today but into the future.

At a UCSB Asian American Studies event, Maxine Hong Kingston, a renowned Asian American author, reminded the audience of our role and of the potential of our actions to impact future generations. When asked how activism in the 1960s compared to the present, she highlighted that our actions may have impacts that extend beyond our generation. This idea reinforces the belief that “the past constrains the present and the future is shaped by the present,” underlining our opportunity—and responsibility—to shape the future, particularly in AI, with the help of Sociological Imagination.

At a recent University of California Artificial Intelligence Congress, one message was clear: AI’s future is not inevitable, and we have the opportunity and responsibility to shape that future intentionally. 

On social media, at universities, and across the higher education sector, discussions about AI range from personal to societal impacts and from detailed, tactical aspects to broad, strategic considerations, including AI’s potential to improve efficiency or even transform higher education completely. These varied discussions often reveal that some perspectives are too narrow, overlooking larger societal effects and future outcomes, while others are too broad, missing crucial personal implications.

Another ethical debate centers on AI’s role in the workplace – whether to replace or enhance humans. This discussion focuses on AI’s potential either to automate jobs, leading to displacement, or to augment human workers’ capabilities, thereby increasing productivity and job satisfaction. The debate is critical because it directly affects individual livelihoods and organizational effectiveness, and it raises questions about the kind of future we want to create through the implementation of AI technologies.

These observations highlight the need for a holistic and balanced approach, one that examines both the fine details and the big picture of using AI and what our choices will mean for the future. It’s important to address immediate problems and opportunities while also thinking ahead to the long-term impacts these technologies will have on people, places, and society.

I propose using Sociological Imagination as a framework to guide our conceptualization and implementation of Generative AI.

Sociological imagination can help us understand AI better by showing its wide effects and possibilities. For example, in healthcare, it can show us how AI might change doctor-patient relationships and who gets healthcare. In education, it helps us see how AI could affect how students learn and teachers teach. This approach gives us a broader view of AI’s role in society. Here are some ways to frame AI from a sociological imagination perspective.

1. Bridging Individual and Collective Experiences

Sociological imagination helps in connecting individual experiences with AI to collective societal outcomes. For example, personal encounters with AI-driven services can inform broader discussions about privacy, data security, and user consent. When we look at how people interact with AI, it’s important to think about how these interactions shape what we expect from technology. We need to develop AI systems that protect individual freedom and benefit everyone in society.

2. Highlighting Socioeconomic Impacts

This framework encourages examining the socioeconomic disparities that AI might exacerbate or mitigate. Discussions should focus on how AI can speed up work and broaden access to information, but we must also consider the downsides, like job losses and growing gaps between rich and poor. Using sociological imagination, stakeholders can strategize on deploying AI to address rather than deepen social and economic divides.

3. Incorporating Historical Lessons

Sociological imagination involves learning from past technological and social shifts to predict and shape AI’s impact. Studying history helps us see how technology has transformed industries and societies. It teaches us how to manage changes, reduce harm, and take advantage of new opportunities. This historical perspective can be crucial in anticipating and strategically preparing for AI’s long-term consequences.

4. Envisioning Future Scenarios

Utilizing sociological imagination in AI discussions also means thinking about the future societal implications of AI integration. It aids in envisioning future scenarios based on current trends, from utopian to dystopian. This forward-looking approach helps in designing AI policies and technologies that are adaptable and resilient, ready to handle unexpected societal changes and challenges.

5. Ethical and Cultural Considerations

AI discussions guided by sociological imagination naturally include ethical and cultural considerations, recognizing the diverse contexts in which AI operates. These discussions involve questioning who is programming the AI and whose values are reflected in its operations. The goal is to ensure that AI systems are culturally sensitive and ethically designed, promoting fairness and avoiding biases that can harm underrepresented groups.

6. Facilitating Multi-stakeholder Dialogues

Sociological imagination fosters inclusive dialogues involving multiple stakeholders—technologists, policymakers, affected communities, and ethicists. It plays a crucial role in understanding the intersection of individual experiences and broader social processes, fostering a sense of inclusion and understanding in these discussions.

Understanding sociological imagination is key to seeing how AI fits into our lives. This idea connects our personal experiences with the larger society. It pushes us to think about both small details and big effects of AI. By learning from history and looking ahead, we get a full view of AI’s impact on jobs, fairness, and cultural differences.

Using sociological imagination, we can ensure that AI is not only advanced but also fair and inclusive. It encourages open talks with tech experts, policymakers, and everyone else to make AI that meets personal and societal needs. This way of thinking helps us make decisions about AI that improve our lives and tackle big social issues. So, let’s use sociological imagination as a guide, share our thoughts, and work together towards a better AI future.


Charting the Future of AI in Higher Education: An Invitation to Collaborate on a Higher Education AI Capability Maturity Model

As we navigate the evolving landscape and adoption of artificial intelligence (AI) in higher education, it’s become increasingly clear that a guiding framework is needed to help chart these efforts.

Since the launch of ChatGPT in November 2022, I’ve been closely observing how higher education institutions adapt to and embrace AI technologies. My observations suggest the vast potential of AI to transform our campuses, but they have also highlighted the complexities of effectively and ethically integrating these technologies.

In co-leading the AI Community of Practice at our institution, I’m excited about the various AI-related initiatives emerging across departments. These are driven by genuine enthusiasm and a desire to innovate, and the campus can further enhance them through a coordinated, institutional-level effort. It’s equally essential to acknowledge and learn from other institutions’ AI efforts to accelerate our campus initiatives. These observations led me to create the Higher Education AI Capability Maturity Model, an initial attempt to develop a comprehensive framework that unifies disparate AI efforts cohesively and strategically.

This model is a starting point, a compass to guide campus leaders and communities through the initial stages of AI integration in higher education. It aims to evaluate current AI initiatives, foster discussions about future goals, and provide a benchmark for measuring progress against other institutions. Importantly, this model is designed to evolve through collaboration and shared insights from the broader higher education community.

I recognize that this initial model is the beginning of a journey toward a more robust and comprehensive framework. I invite the higher education community to join me in this conversation. I’m actively seeking ideas, feedback, and input to refine and expand the model, making it truly reflective of the diverse needs and aspirations of higher education institutions everywhere.

This effort is not just about creating a tool for assessment or benchmarking; it’s about fostering a culture of collaboration and innovation in using AI on our campuses. Let’s collaborate on improving this model and on ensuring that our approach to AI is strategic, inclusive, and aligned with the core values of higher education.

Draft Higher Education AI Capability Maturity Model

Please get in touch with me at joepsabado@gmail.com if you’re interested in working together.


Crafting a Sustainable Future: A Long-Term Strategy and Roadmap for Integrating Generative AI in Higher Education

What is your campus’s vision for Generative AI (Gen AI)? Is your campus leadership still on the fence or reluctant to embrace this transformative technology?

Gen AI is likely to remain a key element in higher education, but currently, the sector’s response is often compartmentalized, reactive, and short-term, focusing mainly on immediate risks rather than embracing the technology as a long-term, revolutionary asset.

This perspective is based on the following observations:

A long-term and sustainable strategy requires key strategic pillars and a roadmap. Here are some ideas to consider.


Key Strategic Pillars:

Strategic Leadership and Governance

  • Governance Framework: Establish a committee with administration, faculty, staff, and student representatives to oversee AI policy and strategy.
  • Ethical Use Policy: Create a campus-wide code of ethics for responsible AI use.
  • Long-Term Vision and Strategy: Articulate a vision that aligns AI with the institution’s long-term goals, involving all campus community members.

Ethics, Governance, and Public Policy

  • AI Governance: Establish structures for AI governance, focusing on ethical considerations and inclusive representation from all campus groups.

Curriculum Development and Academic Enhancement

  • Course Development: Integrate AI topics across curricula for students and training programs for staff.
  • Workshops and Certifications: Offer development workshops and certification programs in AI applications and ethics for faculty, staff, and students.
  • Curriculum Integration and Evolution: Evolve curriculum to include AI in diverse subjects, creating AI-focused degrees and staff development opportunities.

Continuous Learning and Development

  • Development Programs: Offer ongoing learning opportunities in AI for students, faculty, and staff, including mentorship and leadership programs.

Research and Innovation

  • Research Agenda: Identify and pursue key AI research areas, leveraging strengths across faculty, staff, and students.
  • Startup Incubator: Support AI startups from the campus community, providing resources and support.
  • Collaborative Research and Development: Encourage collaborative research involving faculty, staff, and external partners.

Resource Allocation and Infrastructure

  • Faculty and Staff Development Funds: Allocate grants for AI research and pedagogical innovation, including staff training programs.
  • Infrastructure Investment: Upgrade facilities for AI, ensuring access for research and practical applications by all campus members.
  • Operational Guidelines and Training: Implement AI guidelines for administrative tasks, providing training for all relevant staff.

Community Engagement and Global Collaboration

  • Knowledge Exchange Network: Establish a platform (e.g. Community of Practice) for AI knowledge sharing among faculty, staff, students, and the broader community.
  • Public Seminar Series: Host seminars on AI’s impact, encouraging participation from the entire campus and public.

Career Development and Networking

  • Career Network: Build an AI career network offering guidance, job placement services, and mentorship, connecting students, staff, and alumni in the field.

Monitoring, Evaluation, and Sustainability

  • Annual AI Symposium: Host an event to showcase AI developments, inviting external experts and encouraging participation from all campus members.
  • Strategy Review: Regularly assess and update the AI strategy based on feedback from faculty, staff, students, and external developments.
  • Sustainability and Adaptability: Focus on sustainable and adaptable practices in AI ecosystem development, involving all campus sectors.

Alumni and Industry Engagement

  • Alumni Involvement: Engage alumni in AI initiatives, facilitating connections with current students, faculty, and staff.
  • Industry Partnerships: Develop partnerships for internships, research, and application insights, involving the entire campus community.

Outreach and Public Engagement

  • Community Outreach: Implement AI literacy programs for the community, involving staff and students in outreach activities.
  • Public Engagement Activities: Organize activities to promote public understanding of AI, encouraging participation from all campus members.

Future-Oriented Analysis and Scenario Planning

  • Predictive Analysis and Scenarios: Use analytics to anticipate AI trends, considering their implications for education and campus operations.

Financial Strategy and Sustainability

  • Funding Strategies: Develop plans for securing funding and ensuring ROI on AI investments, considering the needs of the entire campus.

Roadmap

Phase 1: Foundation and Awareness

Duration: 6-12 months

  1. Assessment and Awareness Building
    • Conduct an institutional audit to assess current AI capabilities and needs.
    • Initiate awareness campaigns about Gen AI for all campus members.
  2. Establishment of Leadership and Governance
    • Form a cross-functional Gen AI Steering Committee.
    • Define roles and responsibilities for AI governance.
  3. Initial Policy and Ethical Framework Development
    • Develop a preliminary Gen AI ethical use policy.
    • Start discussions on Gen AI implications in education and research.

Phase 2: Strategy Development and Planning

Duration: 12-18 months

  1. Strategic Planning and Vision Setting
    • Develop a clear, long-term Gen AI strategy aligned with institutional goals.
    • Set measurable objectives and key performance indicators (KPIs).
  2. Curriculum and Program Planning
    • Plan for the integration of Gen AI topics into existing curricula.
    • Design professional development programs for faculty and staff.
  3. Resource Allocation and Infrastructure Planning
    • Budget for Gen AI initiatives, including research, training, and infrastructure.
    • Plan infrastructure upgrades for AI support.

Phase 3: Implementation and Pilot Programs

Duration: 18-24 months

  1. Curriculum and Program Implementation
    • Introduce Gen AI topics into selected courses.
    • Launch training workshops and certification programs for staff and faculty.
  2. Pilot Research and Development Projects
    • Initiate pilot Gen AI research projects.
    • Establish a startup incubator for Gen AI innovations.
  3. Operational Integration
    • Implement Gen AI in administrative tasks as pilot programs.
    • Start upgrading infrastructure to support Gen AI applications.

Phase 4: Community Engagement and Expansion

Duration: 24-36 months

  1. Community Engagement and Knowledge Exchange
    • Organize public seminars and workshops on Gen AI.
    • Establish a Gen AI Community of Practice.
  2. Expansion of Curriculum and Research Programs
    • Broaden the integration of Gen AI across multiple disciplines.
    • Expand research projects and collaborations, both domestically and internationally.
  3. Operational Scaling and Further Integration
    • Scale up successful Gen AI applications in administration.
    • Continue infrastructure enhancements for wider Gen AI deployment.

Phase 5: Evaluation, Monitoring, and Continuous Improvement

Duration: Ongoing

  1. Regular Assessment and Strategy Review
    • Conduct biannual reviews of Gen AI initiatives.
    • Adjust strategies based on feedback and technological advancements.
  2. Continuous Professional Development
    • Provide ongoing learning opportunities in Gen AI for all campus members.
    • Keep curriculum and programs up to date with the latest AI advancements.
  3. Sustainability and Future Planning
    • Focus on sustainable practices in AI use.
    • Anticipate future AI trends and prepare the institution accordingly.


The Ethical and Responsible Use of AI in Higher Education

What is the ethical and responsible use of AI in higher education?

I came across the concept of FATE (Fairness, Accountability, Transparency, and Ethics) from Microsoft and other organizations. The question above, my curiosity about ethics frameworks, and the FATE concept led me to think more about the ethical and responsible use of AI in higher education.

The appropriate use of Generative AI like ChatGPT in the classroom dominates discussions on social media. But applied to all facets of higher education institutions, how do campus community members and external stakeholders practice the principles of ethical and responsible use of AI?

This blog post is a high-level overview of this topic. I prompted ChatGPT with a series of prompts, and the majority of this post’s content was generated by ChatGPT.

The ethical and responsible use of Artificial Intelligence (AI) in higher education involves navigating a complex landscape of considerations to ensure that the deployment of these technologies is done in a manner that upholds the fundamental principles of fairness, transparency, and accountability, among others. The discussions around this topic can be enriched by understanding various perspectives and guidelines highlighted in academia and the broader educational sector.

Ethics Frameworks:

Ethics, a branch of philosophy dealing with what is morally right or wrong, has various frameworks that provide structured approaches to evaluating moral issues. These frameworks offer perspectives based on principles, values, or virtues, guiding individuals or organizations in ethical decision-making ​1​. Here are some frameworks and a brief description of each:

  • Virtue Ethics: Focuses on the inherent character of a person rather than on specific actions. It emphasizes virtues or moral character.
  • Deontology: Based on rules and duties. It suggests an action is morally right if it follows a set rule or duty, regardless of the outcome.
  • Rights and Responsibilities: Emphasizes the rights of individuals and the responsibilities that come with these rights.
  • Consequentialism: Suggests that the morality of an action is judged solely by its consequences.
  • Care-based Ethics: Stresses the importance of healthy relationships and the well-being of individuals and their interdependence ​2​.
  • ETHICS Model: A theoretically grounded ethical decision-making model encompassing utilitarianism, moral relativism, and moral egoism as part of its framework ​3​.
  • Framework for Ethical Decision Making (Markkula Center): A framework designed as an introduction to thinking ethically, aiding individuals in reflecting on the ethical aspects of their decisions ​4​.
  • Framework for Making Ethical Decisions (Ethics at Work): Provides a summary of the major sources for ethical thinking and presents a framework for decision-making while considering multiple perspectives, ethical theories, and principles ​5​.

Ethical and Responsible Use of AI in Higher Education:

Understanding AI and its Implications:

It’s crucial for stakeholders in higher education, including educators, administrators, and leaders, to develop a foundational understanding of AI technologies. This understanding should extend to the design, implementation, and implications of AI in educational settings. A well-informed stakeholder community can engage in meaningful dialogue and decision-making processes concerning the ethical deployment of AI in higher education​1​.

Engaging stakeholders in higher education through workshops on AI technologies, offering courses on AI ethics, organizing public lectures by AI experts, developing online resources explaining AI, and establishing forums for open discussions help foster a foundational understanding of AI and its implications in educational settings.

Ethical Design, Implementation, and Governance:

Ethical considerations should permeate the design, implementation, and governance of AI systems. These considerations include leveraging existing legal and regulatory frameworks and developing new governance structures to ensure fairness, transparency, and accountability in AI applications within higher education ​1​.

Establishing ethics committees, developing ethical guidelines for AI research, conducting regular audits to ensure compliance with ethical and legal standards, incorporating mechanisms for accountability and redress, and engaging with external ethics boards form a framework for the ethical design, implementation, and governance of AI in higher education.

Addressing Equity and Discrimination:

AI presents unique challenges to equity in education, especially concerning commonly held views on discrimination. It’s vital to raise awareness about these challenges and work towards mitigating biases and promoting educational fairness ​1​.

Implementing bias detection and mitigation tools in AI systems, conducting regular training on equity and inclusivity, establishing channels for reporting AI-related discrimination, partnering with diverse communities, and collaborating with experts on equity are crucial steps toward addressing the challenges of equity and discrimination posed by AI.
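As a concrete illustration of what a basic bias-detection check might look like, here is a minimal sketch (not from the original post) that computes per-group positive-outcome rates and a demographic parity gap from labeled records. The data and function names are hypothetical; production tools such as Fairlearn or AIF360 provide far more complete fairness metrics.

```python
from collections import defaultdict

def selection_rates(records):
    """Compute per-group positive-outcome rates from (group, outcome) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += int(outcome)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(records):
    """Largest difference in positive-outcome rates between any two groups."""
    rates = selection_rates(records)
    return max(rates.values()) - min(rates.values())

# Hypothetical admissions-style data: (demographic group, favorable outcome)
records = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(selection_rates(records))       # per-group rates
print(demographic_parity_gap(records))  # gap between highest and lowest rate
```

A campus could run a check like this on the outputs of an AI-assisted process and flag gaps above an agreed threshold for human review.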

Human Rights Perspective:

Ethically deploying AI in higher education also involves adopting a human rights perspective, ensuring that the technology does not infringe upon the rights of individuals but rather supports equitable access to educational resources and opportunities ​1​.

Ensuring adherence to privacy laws, conducting human rights impact assessments, establishing a human rights-centered framework for AI governance, collaborating with human rights organizations, and promoting transparency and informed consent are measures to align AI deployment with a human rights perspective in higher education.

Bias Mitigation and Privacy Protection:

The ethical aspect of AI use in higher education significantly revolves around addressing AI error, bias, discrimination, and data privacy. It’s essential to work towards reducing biases and protecting individuals’ privacy ​1​.

Implementing strict data anonymization and encryption standards, deploying explainable and auditable AI systems, establishing a robust data governance framework, conducting regular privacy audits, and engaging in ongoing dialogue with stakeholders are pivotal in mitigating biases and protecting privacy.
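To make “data anonymization” slightly more concrete, here is a minimal, hypothetical sketch of pseudonymizing student identifiers with a keyed hash. The key, record fields, and function name are illustrative only; a real deployment would pair this with secure key management, access controls, and a broader de-identification review.

```python
import hashlib
import hmac

# Hypothetical secret key; in practice this would live in a secure key store.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize(student_id: str) -> str:
    """Replace a student ID with a keyed hash so datasets can still be
    joined on the same pseudonym without exposing the real identifier."""
    return hmac.new(SECRET_KEY, student_id.encode(), hashlib.sha256).hexdigest()

record = {"student_id": "A1234567", "gpa": 3.4}
safe_record = {**record, "student_id": pseudonymize(record["student_id"])}
```

The keyed (HMAC) construction matters here: a plain unsalted hash of a short ID could be reversed by brute force, whereas linking pseudonyms back to students requires the key.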

Human Responsibility:

Ethical AI underscores human responsibility in creating and deploying AI systems. The human element ensures that AI systems operate within ethical boundaries​ 2​.

Establishing clear lines of accountability, providing training for human overseers of AI systems, ensuring human-in-the-loop mechanisms in critical decision-making processes, encouraging ethical leadership, and promoting a culture of responsibility are steps to underscore human responsibility in the deployment and management of AI systems.

Promoting Digital Literacy:

Ensuring that educators and students have the necessary digital literacy to understand, interact with, and critically evaluate AI technologies is an integral part of ethical AI use in higher education ​1​.

Offering digital literacy courses, developing resources on digital rights and online safety, organizing digital literacy campaigns, establishing partnerships with tech companies for training, and creating platforms for sharing digital literacy resources are measures to foster digital literacy among educators and students.

Interdisciplinary Collaboration:

Addressing the ethical implications of AI in higher education necessitates interdisciplinary collaboration, drawing insights from ethics, law, social sciences, and other fields​ 3​.

Forming interdisciplinary committees, organizing cross-departmental workshops, encouraging joint research projects on AI ethics, establishing partnerships with other institutions, and promoting dialogues between academia, industry, and government are initiatives to foster interdisciplinary collaboration on ethical AI in higher education.

Promoting Ethical Practices in Classrooms:

Encouraging, practicing, and supporting the ethical and responsible use of AI in classrooms is pivotal, requiring multiple ways to think through the use of generative AI in teaching and learning​ 4​.

Integrating AI ethics discussions within course curricula, encouraging faculty to adopt ethical AI tools, organizing classroom discussions on real-world AI ethical dilemmas, encouraging students to undertake projects exploring the ethical dimensions of AI, and providing resources for teachers to explore ethical AI applications help in promoting ethical practices in classrooms.

Conclusion:

Exploring the ethical and responsible use of AI in higher education requires examining the topic from various disciplines. The concept of FATE—Fairness, Accountability, Transparency, and Ethics—guides this exploration. Ethics frameworks act as roadmaps in understanding this complex topic. Stakeholders representing diverse campus communities should be involved in designing and implementing AI policies and practices.

References:


Leveraging ChatGPT’s Image Analysis for Enhanced Student Success: A Deep Dive into the 24×7 On-Demand Omni-channel Student Success Ecosystem

I finally got access to ChatGPT’s image analysis functionality today! I’ve seen social media posts about how users are finding ways to use this new feature. I wanted to know how ChatGPT can help me enhance the idea of a Student Success Support Model I introduced in this blog post. Here is the diagram of this concept.

The following is a series of prompts and ChatGPT responses to the concept/image above.

Prompt: Interpret the image.

The image depicts a diagram titled “24×7 On-Demand Omni-channel Student Success Ecosystem v2” created by Joe Sabado on 01/04/21. The diagram seems to be illustrating the various components and channels involved in a student success ecosystem. Here’s a breakdown:

  1. Communication Channels:
    • Platforms like Chatbot, Websites, Social Media have 24×7 availability.
    • Video Conferencing tools like Zoom and Microsoft Meeting.
    • Email, Text Messaging, Chat, Voice (phone), and Face-to-Face (on-premise) methods.
  2. Students are at the top, indicating they are the primary users or beneficiaries of the system. They interact with the various communication channels.
  3. Staff: Below the students are different staff roles that assist and interact with students:
    • Student Success Coaches
    • Case Managers who handle financial, mental/health issues.
    • General Staff
    • Faculty
  4. Tools & Technologies:
    • Personalized Self-Service: Appears to interact with a Digital Assistant (like Siri or Alexa).
    • Dashboard: Central interface for accessing various tools and data.
    • Customer Relationship Management (CRM): Software for managing interactions with students.
    • Data Analytics/Visualization: Offering descriptive, diagnostic, predictive, and prescriptive insights.
    • Data Lake (cloud): Centralized storage repository.
  5. Information Systems Integration via API: Various systems integrated for seamless data exchange:
    • Institutional Data (campus statistics)
    • Administrative Systems (e.g., Housing, Dining)
    • Learning Management System
    • Student Information System (SIS)
    • Academic Data (like Curriculum, grading)
    • Student Advising
    • Library
    • Student Engagement
    • Other Systems

Overall, the diagram showcases a comprehensive system that aims to provide round-the-clock support for students, integrating various tools, communication channels, and information systems. The design suggests a focus on student success, convenience, and holistic support.
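The “integration via API” idea in the diagram can be sketched as follows. This is a hypothetical illustration, not part of the original concept: the system names and lookup functions are stand-ins for authenticated REST calls to a real SIS, LMS, or advising system, showing how per-system records might be aggregated into one profile joined on a shared student ID.

```python
# Stand-in lookups; in a real deployment each would be an authenticated
# REST call to the corresponding campus system.
def sis_lookup(sid):
    return {"name": "Sample Student", "standing": "good"}

def lms_lookup(sid):
    return {"courses": 4, "last_login": "2021-01-03"}

SYSTEMS = {"sis": sis_lookup, "lms": lms_lookup}

def student_profile(sid):
    """Aggregate records from every integrated system into one profile,
    joined on the shared student ID."""
    profile = {"student_id": sid}
    for name, lookup in SYSTEMS.items():
        profile[name] = lookup(sid)
    return profile

print(student_profile("A1234567"))
```

A dashboard or CRM layer like the one in the diagram would consume such merged profiles rather than querying each system separately.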

Prompt: Suggest areas of gaps and opportunities for improvements.

  1. Single Point of Access: While there are multiple communication channels, it might be beneficial to create a unified portal or app where students can easily access all available resources, rather than navigating multiple platforms.
  2. Feedback Mechanism: There doesn’t seem to be a direct way for students to provide feedback on the services they receive. Integrating a feedback system can help in refining and enhancing the service quality.
  3. AI & Machine Learning: While there are chatbots and digital assistants, leveraging advanced AI and machine learning can further personalize student experiences, predict their needs, and proactively address potential issues.
  4. Integration of External Resources: There might be external resources or platforms (e.g., third-party study tools, online communities) beneficial for students. Consider integrating or partnering with them.
  5. Security & Privacy: With the integration of various systems and the use of a data lake, ensuring robust security measures and strict data privacy compliance is crucial.
  6. Professional Development for Staff: Regular training sessions for staff to familiarize them with new tools, methodologies, and best practices can enhance their efficiency and effectiveness.
  7. Offline Support: While there’s an emphasis on digital tools, ensuring that students who prefer offline or face-to-face interactions aren’t left behind is essential.
  8. Multilingual Support: Diverse student populations might benefit from multi-language support in chatbots, digital assistants, and other communication channels.
  9. Analytics & Reporting: The current analytics focus on descriptive, diagnostic, predictive, and prescriptive insights. Consider adding real-time analytics and creating dashboards that can highlight anomalies or areas of concern immediately.
  10. Accessibility: Ensure that all tools, especially digital platforms, are accessible to students with disabilities. This can include voice commands, screen readers, or other assistive technologies.
  11. Holistic Well-being: While there’s a mention of financial and mental/health support, consider adding more comprehensive well-being resources, including physical health, career counseling, and social well-being.
  12. Onboarding & Training: For new students or those not tech-savvy, an onboarding process or training modules on how to use the ecosystem can be beneficial.
  13. Continuous Improvement: Establishing a dedicated team or system for continuous assessment and improvement of the ecosystem will ensure it remains up-to-date and meets students’ evolving needs.

Remember, while these are suggestions based on the diagram, the real effectiveness and feasibility of these improvements would depend on the specific context, budget, and infrastructure of the institution or organization implementing this ecosystem.

Prompt: Suggest opportunities for effective application of Generative AI in this concept.

  1. Personalized Learning Paths: Generative AI can analyze students’ past performances, strengths, and areas of improvement to generate custom-tailored study plans or curriculum pathways. This can help each student focus on what’s most beneficial for their academic growth.
  2. Dynamic Content Creation: For topics where students frequently request more information or clarification, Generative AI can produce additional study materials, summaries, or explainer videos, thus enhancing the learning resources available.
  3. Simulated Scenarios: For courses that involve problem-solving, Generative AI can create an array of unique problems or case studies, allowing students to practice more diversely.
  4. Virtual Assistants: Beyond basic chatbot functions, Generative AI can be utilized to develop more sophisticated virtual assistants that can generate responses in natural language, assisting students with complex queries or creating interactive learning experiences.
  5. Automated Essay Scoring and Feedback: Generative AI can evaluate student essays or assignments, providing not just scores but also constructive feedback on areas of improvement, structure, and coherence.
  6. Enhanced Digital Interactions: Generative AI can produce realistic avatars or digital humans for more engaging virtual office hours or counseling sessions, especially when staff is unavailable.
  7. Predictive Analytics: Generative models can be used to simulate potential future scenarios based on current trends, helping institutions anticipate challenges or areas where intervention might be needed.
  8. Customized Communication: By analyzing student interactions and preferences, Generative AI can craft personalized messages, reminders, or notifications, making communication more relevant and engaging.
  9. Collaborative Projects: Generative AI can be used to create virtual environments or scenarios where students can collaborate, mimicking real-world challenges and tasks related to their field of study.
  10. Creative Arts & Design: For courses in the arts, design, or media, Generative AI can be employed to suggest designs, compose music, or even co-create artworks, providing students with innovative tools to explore their creativity.
  11. Language Translation: Generative AI can assist in real-time translation, ensuring that non-native speakers can access content in their preferred language, thus improving inclusivity.
  12. Cultural Adaptation: For institutions with a diverse student body, Generative AI can be trained to understand cultural nuances and adapt content or communication to be more culturally sensitive and relevant.
  13. Continual Learning: The system can leverage Generative AI to keep evolving its knowledge base, ensuring that it remains current with the latest research, trends, and student needs.

Integrating Generative AI into the student success ecosystem can significantly enhance the quality of education and support offered. However, it’s crucial to approach its application ethically, ensuring data privacy, minimizing biases, and maintaining transparency with students about how their data is used.

Prompt: What student-success, technology adoption, pedagogical, and other frameworks should be considered in implementing the concept in the diagram?

  1. Student-Success Frameworks:
    • Tinto’s Model of Student Retention: Focuses on the interplay of social integration, academic integration, and student persistence.
    • Astin’s Theory of Involvement: Emphasizes the significance of student involvement in academic and extracurricular activities for success.
  2. Technology Adoption Frameworks:
    • Technology Acceptance Model (TAM): Assesses how users come to accept and use a technology based on perceived usefulness and perceived ease of use.
    • Unified Theory of Acceptance and Use of Technology (UTAUT): Provides a unified view of user acceptance of technology, combining elements from multiple models.
  3. Pedagogical Frameworks:
    • Constructivist Theory: Prioritizes learner-centered approaches where students actively construct their own knowledge.
    • Bloom’s Taxonomy: Classifies learning objectives into hierarchical levels from remembering facts to creating new concepts.
    • Universal Design for Learning (UDL): Focuses on creating curricula that meet the needs of all learners from the start.
  4. Data and Analytics Frameworks:
    • Learning Analytics: Utilizes detailed data collection and analysis to understand and optimize learning and the environments in which it occurs.
    • Predictive Analytics for Student Success (PASS): Uses historical data to predict future outcomes, such as student dropout risk.
  5. Implementation and Change Management Frameworks:
    • Kotter’s 8-Step Change Model: Provides a structured approach to leading organizational change.
    • ADKAR (Awareness, Desire, Knowledge, Ability, Reinforcement): Focuses on the people side of change to ensure successful adoption.
  6. Stakeholder and Communication Frameworks:
    • RACI (Responsible, Accountable, Consulted, Informed): Clarifies roles and responsibilities during a project or process.
    • Stakeholder Analysis Matrix: Assesses the interest, influence, and impact of stakeholders.
  7. Integration and Interoperability Frameworks:
    • Enterprise Service Bus (ESB): Aims to integrate different systems and applications in an organization.
    • API-first Design: Designs the API specification before any other code, ensuring seamless integration between systems.
  8. Security and Privacy Frameworks:
    • ISO/IEC 27001: Offers requirements for an information security management system.
    • FERPA: U.S. federal law that protects the privacy of student education records.
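
To make the Learning Analytics and PASS ideas above concrete, here is a toy sketch of scoring students for dropout risk. The features, weights, and threshold are invented for illustration only; a real deployment would fit the model to historical institutional data and validate it for bias.

```python
import math

# Illustrative weights -- in practice these would be learned from
# historical data, not set by hand.
WEIGHTS = {"gpa": -1.2, "missed_logins": 0.4, "credits_behind": 0.6}
BIAS = 0.5

def dropout_risk(student: dict) -> float:
    """Return a 0-1 risk score via a logistic function over the features."""
    z = BIAS + sum(WEIGHTS[k] * student.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def flag_for_outreach(students: dict, threshold: float = 0.5) -> list:
    """Names of students whose risk score exceeds the outreach threshold."""
    return [name for name, feats in students.items()
            if dropout_risk(feats) > threshold]

cohort = {
    "A": {"gpa": 3.8, "missed_logins": 0, "credits_behind": 0},
    "B": {"gpa": 2.0, "missed_logins": 5, "credits_behind": 2},
}
at_risk = flag_for_outreach(cohort)  # flags student "B" for outreach
```

Any such model should be paired with the security, privacy (FERPA), and ethics frameworks listed above before it touches real student data.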

Using ChatGPT’s image analysis feature, as I did in this post, is another example of how Gen AI can be used to build on existing ideas.

What other use cases in higher education can you think of?


Educause Annual Conference – Generative AI Sessions Themes

What are the themes of the 20 Generative AI/ChatGPT sessions at the Educause Conference 2023? As a higher education IT professional who has been exploring the uses and implications of GenAI in higher education, I was curious about what to expect at the national conference, which I will be attending. I prompted ChatGPT to analyze the themes of the following sessions; the results are below. Here are the sessions with descriptions.
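
For readers who want to automate this kind of analysis, here is a sketch of assembling a chat-completion request for theme extraction. The model name and the sample session are placeholders; the payload is built but not sent, since an actual call requires an API key and a client such as the official OpenAI library.

```python
def build_theme_request(sessions: list, model: str = "gpt-4") -> dict:
    """Assemble a chat-completion payload asking for cross-session themes."""
    session_text = "\n\n".join(
        f"Title: {s['title']}\nDescription: {s['description']}" for s in sessions
    )
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You analyze conference sessions and extract recurring themes."},
            {"role": "user",
             "content": "Identify core and unique themes across these sessions:\n\n"
                        + session_text},
        ],
    }

# Placeholder session; in practice, paste in all 20 session descriptions.
sessions = [
    {"title": "Generative AI Implications for the Future",
     "description": "Ethics, equity, and a research agenda for AI."},
]
request = build_theme_request(sessions)
# This dict is what you would pass to the chat-completions endpoint.
```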

Core Themes:

  1. AI Ethics and Equity: A recurring theme, focusing on the moral and fairness aspects of AI.
  2. AI in Education: Examines the transformative potential of AI in teaching and learning.
  3. AI Strategy and Leadership: Discusses the integration of AI into institutional strategies.
  4. AI in Technical and Operational Efficiency: Looks at how AI can streamline various institutional processes.
  5. AI’s Current and Future Impact: Evaluates both the immediate and long-term effects of AI in education.

Unique Themes:

  1. AI and Extended Reality: Explores the intersection of AI with virtual and augmented reality.
  2. AI in Assessment: Discusses the role of AI in academic assessments and integrity.
  3. AI as a Collaborative Tool: Views AI not just as a tool but as a partner in the educational process.

I wanted to go further and prompted ChatGPT to analyze themes for each session. Here are the results.

Monday, October 9, 2023

  • AWS Powerful Partnerships Built on Data Get Results
    • Theme: Data-Driven Decision Making
      • Description: Utilizing data analytics and AI to solve challenges in higher education.
    • Theme: Cross-Institutional Collaboration
      • Description: Building effective teams across institutions to foster innovation.
  • Generative AI as a Deep Agent of Chaos for Higher Education
    • Theme: AI in Education
      • Description: The impact of AI on various aspects of higher education.
    • Theme: Futurism
      • Description: Forecasting the future impact of AI on higher education.

Tuesday, October 10, 2023

  • Generative AI Implications for the Future
    • Theme: AI Ethics and Equity
      • Description: Discussing ethical and equity issues related to AI.
    • Theme: AI Research and Development
      • Description: Proposed research agenda for understanding AI’s impact.
  • Generative AI: 5 Questions for Higher Ed CIOs
    • Theme: AI Adoption Strategies
      • Description: Different approaches to adopting AI in educational institutions.
    • Theme: IT Efficiency
      • Description: Leveraging AI for operational efficiency.
  • Generative AI: the Good, the Bad and the Bias
    • Theme: AI Ethics
      • Description: Ethical considerations in the deployment of AI technologies.
    • Theme: AI Limitations
      • Description: Understanding the limitations of AI in practical applications.
  • EDUCAUSE Leadership Series
    • Theme: AI in Leadership
      • Description: How leaders can navigate the challenges and opportunities of AI.
    • Theme: AI in Pedagogy
      • Description: Educational opportunities related to AI.
  • Boom or Bust? The Future of Generative AI in Higher Education
    • Theme: AI Impact Assessment
      • Description: Debating the potential long-term impacts of AI on higher education.
  • Generative AI: The Future of Learning in Higher Education is Now
    • Theme: AI in Learning and Development
      • Description: How AI can personalize learning and improve efficiency.
  • Beyond ChatGPT: Exploration of AI in Higher Education
    • Theme: AI Preparedness
      • Description: Preparing educational institutions for the impact of AI.
    • Theme: AI Pros and Cons
      • Description: Weighing the benefits and drawbacks of AI in education.
  • Intersection of Artificial Intelligence and Extended Reality
    • Theme: AI and Extended Reality
      • Description: The synergistic effects of AI and virtual reality in education.

Wednesday, October 11, 2023

  • Assessment in the Age of Generative AI
    • Theme: AI in Assessment
      • Description: The impact of AI on educational assessments.
    • Theme: Academic Integrity
      • Description: Concerns about authenticity and integrity in AI-driven assessments.
  • From Innovation to Transformation
    • Theme: AI-Driven Pedagogy
      • Description: Innovative teaching strategies using AI.
  • Risky Business: Can Workflow and Artificial Intelligence Bridge the Enrollment Cliff?
    • Theme: AI in Enrollment
      • Description: Using AI to improve student enrollment processes.
    • Theme: Budget Management
      • Description: Financial considerations in AI adoption.
  • AI Together: Developing Usage Policy for Generative AI with Students
    • Theme: AI Policy Development
      • Description: Creating guidelines for AI usage in educational settings.
  • Generative AI Is More than a Tool- It’s a Digital Collaborator
    • Theme: AI as a Collaborative Tool
      • Description: Viewing AI as a partner in the educational process.
  • The End of Business as Usual: Embracing Generative AI within Higher Ed
    • Theme: AI Strategy and Leadership
      • Description: Integrating AI into institutional strategies and leadership.
  • Bias, Stereotypes, and Hallucinations in Generative AI
    • Theme: AI Bias and Stereotypes
      • Description: Addressing issues of bias in AI outputs.
  • Generative AI in Higher Education: Implications for the Present and Future
    • Theme: AI’s Current and Future Impact
      • Description: Assessing the immediate and long-term effects of AI in education.
  • The Future of Generative AI: Take Action Today
    • Theme: AI Future Planning
      • Description: Preparing for the future impact of AI in education.

Thursday, October 12, 2023

  • Generative Pre-Trained Transformer: A 2023 Odyssey in Technical Support
    • Theme: AI in Technical Support
      • Description: Using AI to enhance IT support services.

Analyzing the themes of conference sessions is yet another use case for ChatGPT. I could also have gone further: researching the presenters, their work, and their stances on Generative AI, and generating questions to ask at their sessions, as I did in this blog post.

I hope to see you at the conference to share ideas on Generative AI and higher education, or connect with me via email at joepsabado@gmail.com.


Applying ChatGPT in the Higher Education IT Service Catalog: Enhancing Service Delivery with AI

The higher education IT service catalog was developed in response to the challenges posed by the diverse and changing user populations, mixed central and distributed services, and the collaborative nature of the sector.

https://www.educause.edu/working-groups/papers/2019/the-higher-education-it-service-catalog-second-edition/the-higher-education-it-service-catalog-model

Furthermore, the catalog serves as a framework to identify potential uses of ChatGPT or other Generative AI to enhance the quality of IT service delivery. The document provides potential uses of ChatGPT for each service area, enabling institutions to leverage AI technology for improved service provision.

https://docs.google.com/document/d/1vypjTMDClqloWDGkkZHiLkvNVvIp7KjLT7r39GdROu8/edit?usp=sharing


In Their Shoes: How Understanding Users Enhances Impact of IT in Higher Education

As IT leaders and practitioners in higher education, it is essential that we have a fundamental understanding of those impacted by the services and systems we provide. This understanding is a cornerstone for enhancing user experiences, enabling success, and fostering the achievement of institutional goals. In addition, it is essential to adopt a framework that ensures equitable access to resources and incorporates fairness into every process.

A faculty member inspired this idea of viewing the impact of the work we do in IT when she said, “Joe, it would be nice to understand the experiences of our staff and students so we have some insights into their challenges and can find opportunities to assist them. I would like to know that my student employees are getting paid on time and that I can assist a student who may have issues outside the classroom by knowing who to contact in student affairs.”

An international student from India also shared a frustrating and heartbreaking experience when I spoke with her about her time on campus, specifically related to IT. She said, “I am a Computer Science Ph.D. student about to graduate. I feel like an impostor sometimes because, throughout my time here, I couldn’t even provision a computer to do my research at this university. The systems are so locked down I have to ask an IT staff member to create a virtual system on my behalf.” She also mentioned that when she finally got access to systems, she had no instructions for installing the software needed to conduct her work and had to figure it out on her own.

Our campus is a minority-serving institution (MSI), designated as both a Hispanic-Serving Institution (HSI) and an Asian American and Native American Pacific Islander-Serving Institution (AANAPISI). As IT providers, we must keep this commitment to diversity, equity, inclusion, and belonging at the forefront of the solutions we implement and the way we provide our services. Access to digital technology as a basic need came to light during the pandemic, when our students were required to attend courses and conduct research away from campus. The digital divide became apparent early on, when we had to provision Chromebooks and help students find affordable internet access at their local residences.

The experiences above led me to ask this guiding question:

How does information technology positively impact the success of the university by providing the utmost support for staff, faculty, students, and administrators along their life cycles and touchpoints with the university, in alignment with their unique desires and needs (“Jobs To Be Done”), to help them achieve their goals and contribute to the overall mission of the university?

Understanding our users involves mapping out the desires, needs, aspirations, and success parameters of various university constituents, namely undergraduate and graduate students, staff, faculty, researchers, and administrators. A general understanding of the unique lifecycles and university touchpoints of these community members is essential for delivering IT services and solutions that contribute positively to the experience and success of each community. It is imperative, however, to view this understanding through a lens that places equity and fairness at the forefront. This approach aims to ensure that every user, regardless of background, role, or circumstances, has equal and fair access to technology resources and support. As IT providers, we must align our procedures and policies with these principles, strengthening the commitment to equal opportunities within the university community.

Incorporating user understanding has numerous benefits for IT providers and those we serve. This user-centric approach assists us in designing and delivering technology solutions that appropriately meet user requirements, ensuring technology supports users’ needs and purposes and enhancing their overall experience. User understanding contributes to increased adoption and engagement, enabling the creation of solutions that resonate with users’ needs and preferences. Finally, understanding users’ backgrounds, contexts, and goals empowers IT providers to communicate and offer support more effectively.

As IT providers, we must incorporate social justice principles into our approach. We should strive for equitable access, ensuring all users have equal access to technology resources and support, promoting inclusivity and social justice. We should aim for transparency in decision-making processes and uphold data privacy and security to maintain trust and respect the rights of individuals within the university community.

By weaving together user understanding, knowledge of their lifecycles, university touchpoints, and social justice principles, we can enhance our effectiveness in contributing to our user communities’ success. We can have a better chance to offer tailored, inclusive, and equitable technology solutions, fostering a culture of fairness and justice. In adopting these principles, we in IT can realize the potential values and provide maximum contributions to the university.

The following are high-level representations of each community’s lifecycles, touchpoints, desires and needs, and success metrics. Included as well is a campus perspective.

As IT leaders, our commitment to social justice, including diversity, equity, and inclusion, and to understanding those we serve is demonstrated through consistent action, learning from experience and education, and explicitly stating our commitments. Educause provides DEI resources and a commitment to diversity statement for IT leaders. Here is my personal commitment. Furthermore, we build understanding of our user communities by volunteering in non-technical roles such as reading admissions applications, mentoring students and staff, teaching introductory courses, and serving on search committees. Here is some of the volunteer work I have committed to through the years. It also includes actively supporting our staff to do the same.

By understanding the unique needs of our diverse user communities and incorporating principles of social justice into our approach, IT providers in higher education can play a vital role in promoting an inclusive and fair university environment. This approach helps to create tailored, equitable technology solutions that empower users to achieve their goals, fostering a culture of fairness and justice within the university community.

The images above can be downloaded here:

Note: Concept by Joe Sabado and content of the lifecycles, desires and needs, and success metrics primarily provided by ChatGPT and other sources.


Unveiling ChatGPT’s Event Mastery: Prepping, Engaging, and Extending Your Experience

This blog post provides another use case for ChatGPT: preparing for an event by gathering information about the topic and the panelists beforehand and generating potential questions to ask at the event. ChatGPT can also create a course syllabus based on the topics covered at the event.

UCSB will host an event called Unraveling the Secrets of ChatGPT: A Panel for Scholars and Educators on May 19th, 2023. The description of the event on its webpage states:

Decoding ChatGPT has become a central topic of interest at the UCSB campus, prompting numerous discussions on its research, educational, and societal impacts. As educators, researchers, and society members, it’s crucial to delve deeper into understanding ChatGPT’s functionality, limitations, and potential implications. This panel discussion will bring together UCSB experts in natural language processing, as well as scholars examining the ethical integration of AI technology in education and its broader societal ramifications. Join us as we explore the future of ChatGPT and its role in shaping learning experiences and our society at large.

Prompt: Provide potential topics covered in this event based on the information provided on this page. https://ml.ucsb.edu/unraveling-secrets-chatgpt

Based on the information provided on the page, the event “Unraveling the Secrets of ChatGPT” at the UCSB Center for Responsible Machine Learning could potentially cover the following topics:

  1. Understanding ChatGPT: This topic will likely delve into the functionality and limitations of ChatGPT. It might also explore the potential implications of this AI model. The talk titled “What do we know about ChatGPT?” by William Wang, Director of the Center for Responsible Machine Learning, will likely cover this topic.
  2. ChatGPT in Education: The panel discussion includes Lisa Berry, a Senior Instructional Consultant in Instructional Development. This suggests that the event might discuss how AI technology like ChatGPT can be integrated into educational settings and how it can shape learning experiences.
  3. Ethical Considerations of AI: The event description mentions the ethical integration of AI technology. This could mean discussions around the ethical implications of using AI models like ChatGPT in various sectors, including education and society at large.
  4. Societal Impacts of ChatGPT: The event aims to explore the societal ramifications of ChatGPT. This could involve discussions on how AI models like ChatGPT are influencing society, potentially changing the way we communicate, learn, and interact.
  5. Natural Language Processing (NLP): Given that the panel includes experts in natural language processing, it’s likely that there will be discussions around the NLP techniques used in ChatGPT and how advancements in this field are contributing to the development of AI models like ChatGPT.
  6. Future of ChatGPT: The event aims to explore the future of ChatGPT. This could involve discussions on potential improvements, new applications, and the long-term vision for AI models like ChatGPT.

In the section titled “Understanding ChatGPT”, the following topics could be potentially covered:

  1. Introduction to ChatGPT: A brief overview of ChatGPT, its purpose, and its capabilities. This could include a discussion on how it’s used in various applications and its significance in the field of AI.
  2. Architecture of ChatGPT: An exploration of the technical aspects of ChatGPT, including its underlying architecture, the GPT model, and how it processes and generates text.
  3. Training Process: A detailed explanation of how ChatGPT is trained, including the data it’s trained on, the training process, and how it learns to generate human-like text.
  4. Functionality and Features: A discussion on the key features and functionalities of ChatGPT. This could include its ability to generate creative text, answer questions, write essays, and more.
  5. Limitations of ChatGPT: An honest discussion about the limitations and challenges of ChatGPT. This could include its inability to fact-check information, potential biases in the model, and its occasional generation of nonsensical or inappropriate responses.
  6. Safety Measures: An overview of the safety measures implemented in ChatGPT to prevent misuse and to ensure it generates appropriate and safe content.
  7. Use Cases of ChatGPT: Real-world examples and case studies of how ChatGPT is being used in various sectors like education, customer service, content creation, and more.
  8. Comparisons with Other AI Models: A comparison of ChatGPT with other similar AI models, highlighting its unique features and advantages.

In the section potentially focusing on “ChatGPT in Education”, the following topics could be covered:

  1. AI in Education: An overview of how AI, and specifically ChatGPT, is being integrated into educational settings. This could include its use in tutoring, grading, and providing feedback.
  2. ChatGPT as a Learning Tool: A discussion on how ChatGPT can be used as a learning tool to aid students in various subjects. This could include its ability to provide explanations, answer questions, and generate educational content.
  3. Personalized Learning with ChatGPT: An exploration of how ChatGPT can be used to provide personalized learning experiences. This could involve its ability to adapt to a student’s learning style and pace, and provide tailored educational content.
  4. ChatGPT in Online Learning: A discussion on the role of ChatGPT in online learning platforms. This could include its use in discussion forums, virtual classrooms, and as a tool for student engagement.
  5. Limitations and Challenges: An honest discussion about the limitations and challenges of using ChatGPT in education. This could include issues of accuracy, potential biases, and the need for human oversight.
  6. Ethical Considerations: A discussion on the ethical considerations of using AI like ChatGPT in education. This could include issues of data privacy, fairness, and the potential impact on employment in the education sector.
  7. Future of AI in Education: A look at the potential future of AI in education, including the role of models like ChatGPT. This could involve discussions on upcoming advancements, potential new applications, and the long-term vision for AI in education.
  8. Case Studies: Real-world examples of how ChatGPT is being used in educational settings, including success stories and lessons learned.

In the section potentially focusing on “Ethical Considerations of AI”, the following topics could be covered:

  1. AI Ethics Overview: An introduction to the field of AI ethics, discussing why it’s important and what it encompasses.
  2. Bias in AI: A discussion on how biases can be introduced into AI systems like ChatGPT, the potential impacts of these biases, and strategies for mitigating them.
  3. Privacy and AI: An exploration of privacy concerns related to the use of AI, including how data is collected, stored, and used in systems like ChatGPT.
  4. Transparency and Explainability: A discussion on the importance of transparency and explainability in AI systems, and the challenges in achieving these with complex models like ChatGPT.
  5. Accountability and AI: A conversation about who should be held accountable when an AI system causes harm or makes a mistake, and how accountability can be ensured.
  6. Ethical Use of ChatGPT: A specific look at the ethical considerations related to the use of ChatGPT, including its potential misuse and the safeguards in place to prevent this.
  7. Regulation of AI: A discussion on the current state of AI regulation, potential future regulations, and how these might impact the use of systems like ChatGPT.
  8. Ethics in AI Education and Research: A conversation about the importance of incorporating ethics into AI education and research, and how this is being done at institutions like UCSB.
  9. Case Studies: Real-world examples of ethical dilemmas in AI, and how they were or could be addressed.

In the section potentially focusing on “Societal Impacts of ChatGPT”, the following topics could be covered:

  1. AI and Society: An overview of how AI, and specifically ChatGPT, is impacting society. This could include its influence on communication, work, and social interactions.
  2. ChatGPT in Various Industries: A discussion on how ChatGPT is being used in various industries such as customer service, content creation, entertainment, and more. This could include its benefits and potential drawbacks in these contexts.
  3. AI and Employment: An exploration of how AI like ChatGPT is affecting the job market. This could include discussions on job displacement, the creation of new roles, and the need for upskilling and reskilling.
  4. AI and Accessibility: A discussion on how AI like ChatGPT can improve accessibility, such as providing assistance for those with disabilities or language barriers.
  5. AI and Culture: A conversation about how AI is influencing culture, including the way we communicate, consume content, and interact with technology.
  6. Ethical and Societal Considerations: A discussion on the ethical and societal considerations of using AI like ChatGPT. This could include issues of privacy, bias, and the digital divide.
  7. Future Societal Impacts: A look at the potential future societal impacts of AI and models like ChatGPT. This could involve discussions on upcoming advancements, potential new applications, and the long-term vision for AI in society.
  8. Case Studies: Real-world examples of how ChatGPT and similar AI models are impacting society, including both positive and negative impacts.

In the section potentially focusing on “Natural Language Processing (NLP)”, the following topics could be covered:

  1. Introduction to NLP: An overview of what Natural Language Processing is, its importance, and its applications in various fields.
  2. NLP Techniques in ChatGPT: A discussion on the specific NLP techniques used in ChatGPT, such as tokenization, attention mechanisms, and transformers.
  3. Evolution of NLP Models: A look at how NLP models have evolved over time, leading up to advanced models like GPT-3.
  4. Challenges in NLP: An exploration of the challenges faced in NLP, such as understanding context, handling ambiguity, and dealing with multiple languages.
  5. NLP and AI Ethics: A discussion on the ethical considerations specific to NLP, such as biases in language data and the potential misuse of text generation technology.
  6. Future of NLP: A look at the potential future of NLP, including upcoming advancements, potential new applications, and the long-term vision for NLP in AI.
  7. NLP in Education and Research: A discussion on the role of NLP in education and research, including how it’s being taught and the current areas of focus in NLP research.
  8. Case Studies: Real-world examples of how NLP, and specifically models like ChatGPT, are being used in various applications, including success stories and lessons learned.
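Tokenization, the first NLP technique listed above, is concrete enough to illustrate with a toy example. The sketch below is a greedy longest-match subword tokenizer; the vocabulary is invented for illustration, and real models like ChatGPT learn far larger vocabularies (typically via byte-pair encoding) from data.

```python
# Toy greedy longest-match subword tokenizer. The vocabulary here is
# hand-picked for the example; production tokenizers learn theirs.
VOCAB = {"chat", "gpt", "token", "ize", "s", " "}

def tokenize(text: str, vocab=VOCAB) -> list[str]:
    tokens = []
    text = text.lower()
    i = 0
    while i < len(text):
        # Take the longest piece starting at i that is in the vocabulary.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # Unknown character: emit it as its own token.
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("ChatGPT tokenizes"))
# → ['chat', 'gpt', ' ', 'token', 'ize', 's']
```

This greedy strategy is a simplification, but it conveys the core idea the panel would likely discuss: models never see raw text, only sequences of subword IDs.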

In the section potentially focusing on “Future of ChatGPT”, the following topics could be covered:

  1. Advancements in ChatGPT: A discussion on the potential advancements in the ChatGPT model, including improvements in its capabilities, accuracy, and safety measures.
  2. New Applications of ChatGPT: An exploration of potential new applications for ChatGPT in various sectors such as healthcare, finance, entertainment, and more.
  3. ChatGPT and Emerging Technologies: A look at how ChatGPT could be integrated with other emerging technologies like virtual reality, augmented reality, and the Internet of Things (IoT).
  4. Long-term Vision for ChatGPT: A discussion on the long-term vision for ChatGPT, including its potential impact on society and how it might shape the future of AI.
  5. ChatGPT and AI Policy: A conversation about the potential future regulations that could impact the use and development of AI models like ChatGPT.
  6. Sustainability of AI Models: A discussion on the sustainability of AI models like ChatGPT, including their energy consumption and environmental impact.
  7. ChatGPT in the Global Context: A look at how ChatGPT could be used and developed in different parts of the world, considering cultural, linguistic, and socioeconomic differences.
  8. Case Studies: Future-oriented case studies or scenarios showcasing how advancements in ChatGPT could impact various sectors and society at large.

Prompt: Provide background information of William Wang, Director, Center for Responsible Machine Learning; Mellichamp Professor of Artificial Intelligence; Co-Director, UCSB NLP Group and their potential views on the topic of this event: https://ml.ucsb.edu/unraveling-secrets-chatgpt

William Wang is the Director of the Center for Responsible Machine Learning, the Mellichamp Professor of Artificial Intelligence, and the Co-Director of the UCSB NLP Group at the University of California, Santa Barbara (UCSB). He has a strong background in machine learning, natural language processing, and artificial intelligence, which is evident from his numerous publications and research projects in these areas.

He received his Ph.D. from the School of Computer Science, Carnegie Mellon University, and has been recognized with several awards, including the DARPA Young Faculty Award, the IJCAI Early Career Spotlight, an NSF CAREER Award, and a Sloan Research Fellowship. His research interests include machine learning (ML), natural language processing (NLP), and social media.

Given his extensive work in machine learning and natural language processing, it’s likely that he has a deep understanding of models like ChatGPT. His role as the Director of the Center for Responsible Machine Learning suggests that he is particularly interested in the ethical implications of AI and ML, including issues like fairness, accountability, and transparency.

In the context of the event “Unraveling the Secrets of ChatGPT,” it’s likely that he would be interested in discussing both the technical aspects of how ChatGPT works and the broader implications of its use. This could include topics like how the model is trained, how it generates responses, and how it can be used responsibly to avoid potential issues like bias or misinformation.

Please note that this is a general interpretation based on available information about William Wang’s professional background and interests. For more specific insights into his views, it would be best to refer to his own statements or writings.

Prompt: Provide background information of Lei Li, Co-Director, UCSB NLP Group; Assistant Professor of Computer Science and their potential views on the topic of this event: https://ml.ucsb.edu/unraveling-secrets-chatgpt

Lei Li is an Assistant Professor in the Computer Science Department at the University of California, Santa Barbara (UCSB) and serves as the Co-Director of the UCSB NLP Group. He has a strong background in natural language processing, machine learning, and data mining.

His research interests are quite diverse and include machine translation, speech translation, multilingual NLP, text generation and summarization, reasoning and question answering, information extraction, AI for drug discovery, green and efficient ML, time series mining and prediction, and probabilistic inference and Bayesian sampling methods.

Lei Li received his Ph.D. from Carnegie Mellon University and his Bachelor’s degree from Shanghai Jiao Tong University. He has been involved in numerous research projects and has published several papers in his areas of interest.

Given his extensive background in natural language processing and machine learning, it’s likely that Lei Li has a deep understanding of models like ChatGPT. His work in areas such as machine translation, text generation, and reasoning could provide valuable insights into the technical aspects of how ChatGPT works.

In the context of the event “Unraveling the Secrets of ChatGPT,” Lei Li might be interested in discussing the technical aspects of how ChatGPT works, including how it handles machine translation and text generation. He might also be interested in discussing the broader implications of its use, including how it can be used responsibly and how it might be integrated with other technologies or applications.

Please note that this is a general interpretation based on available information about Lei Li’s professional background and interests. For more specific insights into his views, it would be best to refer to his own statements or writings.

Prompt: Provide background information of Lisa Berry, Senior Instructional Consultant, Instructional Development and their potential views on the topic of this event: https://ml.ucsb.edu/unraveling-secrets-chatgpt

Lisa Berry is a Senior Instructional Consultant in Instructional Development at the University of California, Santa Barbara (UCSB). She works individually with faculty members to enhance their courses, plans campus-wide teaching events and workshops, and directs the TA Development Program. She has broad expertise in pedagogy, course/curriculum design, educational technologies, assessment, and evaluation of teaching. Some of her particular interests include creating authentic learning experiences, positioning students as co-creators of knowledge, and leveraging technology to make large classes feel small. Before her ten years as an Instructional Consultant, she designed and taught courses for the Environmental Studies Department at UCSB. She also teaches a course for graduate students interested in a career in college and university teaching, where students gain experience applying pedagogical theories and evidence-based teaching to course design, implementation, assessment, and evaluation.

Given her background and interests, Lisa Berry might bring a unique perspective to the event, focusing on the pedagogical implications of AI like ChatGPT. She might discuss how AI can be used to enhance teaching and learning experiences, how it can be integrated into course design and curriculum, and how it can be used to create more engaging and interactive learning environments. She might also explore the ethical considerations of using AI in education, including issues of privacy, fairness, and accessibility.

Prompt: Provide background information of Rita Raley, Professor of English at UCSB and their potential views on the topic of this event: https://ml.ucsb.edu/unraveling-secrets-chatgpt

Rita Raley is a Professor of English at the University of California, Santa Barbara (UCSB). She has a strong background in digital humanities, global Englishes, and contemporary literature. Her research interests lie in the intersection of digital media and humanist inquiry, with a particular emphasis on cultural critique, artistic practices, language, and textuality.

She is the author of “Tactical Media,” a book that explores the intersection of art, critical theory, and new media, and she has published numerous articles on digital poetics, dataveillance, and interventionist art practices.

Given her background and research interests, it’s likely that she would approach the topic of ChatGPT from a critical perspective, examining its implications for language, literature, and digital culture. She might explore questions around how AI like ChatGPT is changing our understanding of textuality and authorship, and how it fits into broader trends in digital media and culture.

Prompt: Provide background information of Norah Dunbar, Professor of Communication at UCSB and their potential views on the topic of this event: https://ml.ucsb.edu/unraveling-secrets-chatgpt

Norah Dunbar is a Professor of Communication at the University of California, Santa Barbara (UCSB). She received her Ph.D. in Communication from the University of Arizona in 2002. Her research interests include deception detection, credibility assessment, and the use of games for learning and assessment. She has conducted research on nonverbal and verbal cues to deception, the effect of suspicion on detection accuracy, and the use of immersive virtual environments to study deceptive behavior.

She is also the current Director of the Center for Information Technology and Society at UCSB. This center focuses on understanding technology’s impact on society and individual behavior, which aligns with her interest in the use of games and virtual environments for learning and assessment.

Given her research interests and her role at the Center for Information Technology and Society, it’s likely that she would approach the topic of ChatGPT from a communication and societal impact perspective. She might discuss how AI like ChatGPT can influence communication, the potential for deception or misinformation, and the societal implications of such technology.

However, to get a more accurate understanding of her views on the topic, it would be best to refer to her specific statements or writings on the subject.

Prompt: Provide background information of Joe Walther, Director, Center for Information Technology & Society; Distinguished Professor; Bertelsen Presidential Chair in Technology and Society and their potential views on the topic of this event: https://ml.ucsb.edu/unraveling-secrets-chatgpt

Joe Walther is a Distinguished Professor of Communication and holds the Bertelsen Presidential Chair in Technology and Society at the University of California, Santa Barbara (UCSB). He is also the Director of the Center for Information Technology and Society at UCSB. His research focuses on the social and interpersonal dynamics of computer-mediated communication, which has influenced the study of relationships, online groups, and interethnic interaction. He has developed original theories on these topics, including the social information processing theory of mediated communication, the hyperpersonal model of online communication (by which people experience exceptionally intense emotions and connections), and warranting theory, among others. He is currently extending his work into new explanations for online hate.

Given his research interests and his role at the Center for Information Technology and Society, it’s likely that he would approach the topic of ChatGPT from a communication and societal impact perspective. He might discuss how AI like ChatGPT can influence communication, the potential for deception or misinformation, and the societal implications of such technology. He might also explore the emotional and interpersonal dynamics that can arise in interactions with AI like ChatGPT, given his work on the hyperpersonal model of online communication.

However, to get a more accurate understanding of his views on the topic, it would be best to refer to his specific statements or writings on the subject.

Prompt: Provide 5 questions for each of the topic and consider the background of the potential panelists who may present on the topic.

Section 1: Introduction to ChatGPT

  1. Can you explain the basic architecture of ChatGPT and how it works?
  2. What are the key differences between ChatGPT and other language models?
  3. How has the development of ChatGPT evolved over time?
  4. What are some of the most significant challenges in developing and training a model like ChatGPT?
  5. How does ChatGPT handle different languages and cultural contexts?

Section 2: Technical Aspects of ChatGPT

  1. Can you delve into the technical details of how ChatGPT generates its responses?
  2. How does ChatGPT handle long-term dependencies in text?
  3. What are the computational requirements for training and running ChatGPT?
  4. How does ChatGPT handle ambiguity in language?
  5. Can you discuss the role of reinforcement learning in ChatGPT’s training process?

Section 3: Ethical Considerations of ChatGPT

  1. What are the main ethical concerns associated with the use of ChatGPT?
  2. How does OpenAI address potential misuse of ChatGPT?
  3. Can you discuss the potential for bias in ChatGPT’s responses and how it’s mitigated?
  4. How does ChatGPT handle sensitive or controversial topics?
  5. What measures are in place to prevent ChatGPT from generating harmful or inappropriate content?

Section 4: Applications of ChatGPT

  1. Can you provide examples of real-world applications where ChatGPT is currently being used?
  2. How can ChatGPT be used in educational settings?
  3. What potential does ChatGPT have in the field of customer service?
  4. How can ChatGPT be used to assist in creative writing or content generation?
  5. Can you discuss any potential applications of ChatGPT in the field of healthcare?

Section 5: Future of ChatGPT and AI

  1. What are the future plans for the development of ChatGPT?
  2. How do you see AI like ChatGPT evolving in the next 5-10 years?
  3. What impact do you think ChatGPT and similar models will have on society in the long term?
  4. What are the potential implications of ChatGPT for the job market?
  5. How can we ensure that the benefits of AI like ChatGPT are distributed equitably across society?

Section 6: Q&A

  1. How does ChatGPT handle languages other than English?
  2. Can you discuss any interesting or unexpected outcomes from the development or use of ChatGPT?
  3. How does ChatGPT handle humor or sarcasm?
  4. What are some of the limitations of ChatGPT and how might they be addressed in future versions?
  5. Can you share any insights from user feedback on ChatGPT?

These questions are designed to cover a broad range of topics related to ChatGPT, from technical details to ethical considerations and future implications. They should provide a comprehensive overview of the subject and prompt interesting discussions from the panelists.

The event covers topics that could be explored like a course, so I prompted ChatGPT to create a syllabus with this prompt:

Suggest a course syllabus based on the topics discussed above. Include learning outcomes and materials. The course is 10 weeks. Include formative and summative assessments.

Course Title: Exploring ChatGPT: Technical, Ethical, and Societal Perspectives

Course Duration: 10 Weeks

Course Description: This course provides an in-depth exploration of ChatGPT, a state-of-the-art language model developed by OpenAI. Students will gain a comprehensive understanding of the technical aspects of ChatGPT, its applications, ethical considerations, and societal impacts. The course will also look ahead to the future of AI like ChatGPT.

Learning Outcomes: By the end of this course, students will be able to:

  1. Understand the technical architecture and operation of ChatGPT.
  2. Discuss the ethical considerations and potential misuse of AI like ChatGPT.
  3. Explore the applications of ChatGPT in various sectors.
  4. Understand the societal impacts of AI like ChatGPT.
  5. Discuss the future of AI and models like ChatGPT.

Course Schedule:

Week 1-2: Introduction to ChatGPT

  • Overview of ChatGPT
  • Basic architecture and operation
  • Evolution and development of ChatGPT
  • Reading: OpenAI’s ChatGPT papers and blog posts
  • Formative Assessment: Quiz on ChatGPT basics

Week 3-4: Technical Aspects of ChatGPT

  • Deep dive into the technical workings of ChatGPT
  • Understanding language processing and generation in ChatGPT
  • Reading: Technical papers on ChatGPT and related language models
  • Formative Assessment: Group discussion and presentation on a technical aspect of ChatGPT

Week 5-6: Ethical Considerations of ChatGPT

  • Ethical concerns and potential misuse of ChatGPT
  • Bias and fairness in AI
  • Reading: Articles and papers on AI ethics
  • Formative Assessment: Case study analysis on an ethical issue related to ChatGPT

Week 7-8: Applications and Societal Impacts of ChatGPT

  • Real-world applications of ChatGPT
  • Societal impacts of AI like ChatGPT
  • Reading: Case studies on ChatGPT applications and societal impacts
  • Formative Assessment: Project on designing a novel application of ChatGPT

Week 9-10: Future of ChatGPT and AI

  • Future developments of ChatGPT
  • Long-term societal implications of AI
  • Reading: Futuristic articles and papers on AI
  • Summative Assessment: Final essay on the future of AI and models like ChatGPT

Assessment:

  • Formative Assessments (40%): Quizzes, group discussions and presentations, case study analysis, and project.
  • Summative Assessment (60%): Final essay on the future of AI and models like ChatGPT.

Materials:

  • OpenAI’s ChatGPT papers and blog posts
  • Technical papers on ChatGPT and related language models
  • Articles and papers on AI ethics
  • Case studies on ChatGPT applications and societal impacts
  • Futuristic articles and papers on AI

Note: This is a suggested syllabus and can be modified based on the specific needs and objectives of the course.


Embracing Technology in Higher Education: Lessons from the Past

As an IT leader and an educator with a long history in higher education, I’ve seen how technology has changed various aspects of academia. New technologies often face skepticism or concerns about how they might affect the values we hold dear. By learning from our experiences, we can better understand and address these concerns. ChatGPT, an AI language model, has recently sparked discussions about academic integrity, reminding me of a similar situation.

A few weeks ago, I read an article saying we shouldn’t automate what we care about. This idea reminded me of the resistance we encountered in 2005 when we introduced electronic medical records (EMRs) to counseling psychologists on our campus. They worried that using technology would make their profession less human and personal.

But what if we think about technology differently? Instead of seeing it as a replacement for what we care about, we can use it to make routine and supporting tasks more efficient. This way, we have more time to focus on what truly matters, like giving our students the attention they need.

The adoption of EMRs is a great example. At first, counseling psychologists were skeptical, but over time, they found that the technology made their work more efficient, giving them more time to spend with their clients. They realized that using technology didn’t take away the human element; it allowed them to focus more on what they cared about.

We can apply this same perspective to ChatGPT and academic integrity concerns. By having open conversations with educators and students, we can address their concerns and show them the potential benefits of using technology like ChatGPT to improve efficiency in education. By tackling the challenges directly and refining the technology, we can ensure it fits the core values of teaching and learning.

It’s important to remember that ChatGPT, or any technology, isn’t inherently good or bad. How we use, regulate, and adapt it to our specific needs determines its impact on academic integrity. By changing how we think about technology and learning from our past experiences, we can better prepare ourselves to face the challenges and opportunities of new technologies like ChatGPT, ultimately making education better for everyone involved.


Transforming Higher Education: How AI and Skilled Educators Can Shape the Future of Learning

I once taught an introduction-to-the-university course for first-year international students. Several students mentioned that they recorded the lectures because the language barrier and their instructors’ speaking styles made it challenging to follow along. They would then review the recordings after class. With this method, students can keep their heads up and focus on the instructor instead of keeping their heads down taking notes, which causes them to miss the visual and oral cues professors give when emphasizing essential concepts.

An instructional designer told me that seats behind the third row in a big lecture hall may as well be considered distance learning. The students who engage by asking questions and participating in discussions are few and often seated in front of the class. During the pandemic, the same instructional designer also noted that Zoom sessions might appear more intimate for students because they can see the instructor up close, including their facial expressions, rather than a minuscule figure in front of an auditorium.

The thoughts above made me think ChatGPT and other generative AI tools could enhance learning and drive transformative changes to higher education. Here are some possibilities:

Flipped classroom. As in the first example above, AI tools that summarize materials and introduce concepts in modes beyond the lecture format, modes that may resonate more with students’ learning styles, can prepare students for active discussions in the classroom. Instructors can provide asynchronous materials (videos, etc.) that students study before class, freeing the class sessions for interactive discussions.

Guide on the side instead of the sage on the stage. Related to the idea above, if higher education is concerned about ChatGPT being used as a cheating device or leading students to become lazy/disengaged, the role of instructors could shift from someone who lectures on stage to an active facilitator. When I attended courses at the UC Berkeley Haas Business School for my executive leadership program and a leadership academy, I was in awe of the caliber of the faculty members. They were experts in their respective fields and skilled facilitators who fostered an engaging and collaborative learning environment. Using the Socratic method, they solicited students’ ideas through dialogues instead of monologues, which was a refreshing experience for a student like me.

Personalized learning. Generative AI as a feedback mechanism can help bridge the gap for students who may be struggling with language barriers and enhance the overall learning experience for all students by addressing their unique strengths and areas for growth. AI tools like ChatGPT can enable personalized learning experiences by providing students with targeted resources and customized feedback based on their learning needs and preferences.

Enhanced accessibility. AI-powered transcription and translation services can make course content more accessible for international students, students with disabilities, or those who prefer learning through different modalities. For example, universities can provide recorded lectures transcribed and translated into other languages, and visual aids can describe visual objects for students with visual impairments in detail, similar to ChatGPT for Be My Eyes.

Virtual mentorship and coaching. Universities can use AI tools like ChatGPT to provide personalized guidance, mentorship, and coaching to students outside the classroom. Students can access on-demand support, helping them navigate academic and personal challenges and improving their overall university experience. One application of this concept is a self-service tool that can “nudge” or remind students to take certain actions based on information in the student information system, learning management system, or other university systems.
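As a rough illustration of the “nudge” idea, the hypothetical sketch below scans student records for an approaching unmet deadline and generates reminders. The field names, roster, and deadline rule are all invented for the example; a real implementation would pull live data from the campus student information system or LMS.

```python
# Hypothetical nudge rule: remind students who have not yet enrolled
# when their enrollment deadline is a week or less away.
from datetime import date

def build_nudges(students: list[dict], today: date) -> list[str]:
    nudges = []
    for s in students:
        deadline = s["enrollment_deadline"]
        # Fire the nudge only for unenrolled students near the deadline.
        if not s["enrolled"] and (deadline - today).days <= 7:
            nudges.append(
                f"Reminder for {s['name']}: enroll in classes by {deadline}."
            )
    return nudges

roster = [
    {"name": "Ana", "enrolled": False, "enrollment_deadline": date(2023, 9, 15)},
    {"name": "Ben", "enrolled": True, "enrollment_deadline": date(2023, 9, 15)},
]
print(build_nudges(roster, today=date(2023, 9, 10)))
# → ['Reminder for Ana: enroll in classes by 2023-09-15.']
```

The same pattern generalizes to other nudges (financial aid forms, advising appointments, housing deadlines) by swapping the rule and the message template.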

Data-driven teaching and learning. AI tools can analyze vast amounts of student performance and engagement data, allowing instructors to make real-time, data-driven decisions and to know their students better. One example is a faculty dashboard with information about their students, including socioeconomic status, demographic background, and academic performance in related or prerequisite courses, alongside suggestions for adjusting the course to improve student performance. Used this way, generative AI can help faculty members identify struggling students early, provide targeted support, and continuously improve their teaching strategies and course content.

By adopting the concepts above and integrating AI tools like ChatGPT into higher education, institutions can create more engaging, inclusive, and effective learning environments that enhance student success and better prepare students for the dynamic world they will enter upon graduation. In this new paradigm, faculty members will continue to play a crucial role, not as the sole distributors of knowledge but as skilled facilitators and guides who support and empower students on their learning journeys.


Why IT Needs to Be at the Table in Campus Operations

As an IT leader in a higher education institution overseeing critical systems, including COVID-19-related ones, I appreciate the importance of IT being at the table and of data-informed decision-making. A particular incident during the height of the COVID-19 pandemic highlighted the importance of IT representation in the institution’s core operations.

When a monthly process required a reboot of some systems, a well-meaning and concerned staff member on the campus COVID-19 coordination team reported that the systems would be down for two days. This misinformation caused a mild panic that could have cost us months of system re-work, paused projects, hundreds of staff hours, and considerable frustration.

Fortunately, having someone from IT explain the situation corrected the misinformation. At the coordination committee’s request, I joined their team meeting to affirm our team’s commitment to addressing their concerns and to provide our perspective. To analyze the impact of the downtime, my team and I reviewed the logs and used an application that alerts us to outages. We discovered the systems were disrupted for only five minutes, far less than the reported two-day outage. This incident demonstrated the importance of data-informed decision-making in addressing complex issues.

The incident also made clear that IT and our partners needed to collaborate to address the issue. Electronic medical records, testing, and reporting are complex systems that require constant monitoring and attention. I asked my supervisor to make me a standing member of the committee going forward, which she approved. As part of the committee, I could address concerns in real time and better understand the actions requested of my team.
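The downtime analysis described above can be sketched in a few lines: given timestamped down/up events pulled from the logs, compute the total disruption. The log format and event structure here are hypothetical; our actual tooling and log layout differed.

```python
# Hypothetical downtime calculation from timestamped status events.
from datetime import datetime

def total_downtime_minutes(events: list[tuple[str, str]]) -> float:
    """events: (ISO timestamp, 'down' or 'up') pairs in chronological order."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    total = 0.0
    down_since = None
    for stamp, state in events:
        t = datetime.strptime(stamp, fmt)
        if state == "down" and down_since is None:
            down_since = t  # outage begins
        elif state == "up" and down_since is not None:
            total += (t - down_since).total_seconds() / 60
            down_since = None  # outage ends
    return total

log = [
    ("2021-02-01T02:00:00", "down"),  # monthly reboot begins
    ("2021-02-01T02:05:00", "up"),    # systems back five minutes later
]
print(total_downtime_minutes(log))  # → 5.0
```

Checking the logs this way turned a reported two-day outage into a verified five-minute blip, which is exactly the kind of evidence that calms a panicked committee.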

Our organization has emphasized the importance of collaboration and data-informed decision-making. Only through working collaboratively towards a solution could we identify the root cause of the issue and take steps to prevent it from recurring. We have also explored ways to make our systems more redundant and have coordinated with the campus for communication before any potential downtime.

The incident we experienced during the COVID-19 pandemic highlighted how critical it is to have IT at the table and to make data-informed decisions. Misinformation can cause panic and frustration and waste resources. By working collaboratively, we can identify and address issues promptly, ensuring that operations run smoothly and effectively.


Understanding the Needs and Wants of Your Students: A Must for Administrators

How can we effectively provide service to those we serve without spending time with them and understanding their world?

I’ve been a staff advisor for student organizations for the last two decades and learned a very important lesson a few years ago. One year, I assumed I could maintain my relationships with the organizations I advise, and understand the incoming student leaders and their needs and wants, simply because I’d been their advisor for many years. I was caught off guard when I sensed a shift in attitude; it was hard to explain, but I felt it. I asked my colleagues whether they were experiencing a change in student attitudes, and they confirmed my observation. It was neither bad nor good; it was just different. Another time, students started talking about “clickers,” and I had no clue what they meant.

One time, I was playing Pokémon with some students, and my supervisor, the Vice Chancellor for Student Affairs, happened to see us. The students thought I would be in trouble, but it was quite the opposite. She was glad I spent time with students and pleased I played Pokémon walking around campus so I could “experience paths where students go that I miss otherwise.” One of the most enduring pieces of advice she gave me is that, as an administrator, “one of the most important insights we have to represent our students comes from the one-on-one conversations we have with them.” It’s also important to note that administrators tend to spend time with student leaders and those already engaged. But the students we most need to reach feel marginalized and suffer in silence because they either don’t know about the services offered or don’t trust the campus.

Some communities require trust before staff, faculty, or administrators are genuinely invited in. It often takes me more than a couple of years, and even the urging of peers who are already comfortable with me, before younger students will meet with me. When they do, they regret not accepting my offer of lunch sooner, having found I’m not as intimidating as they presumed. When I ask why they didn’t meet with me earlier, they tell me they didn’t know how to respond to a staff or faculty member inviting them to a casual lunch or coffee because they had never experienced such an offer.

There’s a saying in the student affairs world to “meet students where they are,” which means making ourselves available at their events or simply browsing social media to get a pulse on their world. I follow the Facebook parents page, TikTok, and Twitter, and I’m also alerted to Reddit posts on an unofficial campus subreddit. Even today, some colleagues are reluctant to use social media, but as my supervisor said, “You’re missing half the conversation if you don’t engage online.”

When we make decisions without students’ input, or based on our own undergraduate experiences from decades ago rather than an understanding of their current world, we can harm our students even with the best intentions.


Turning Pain into Purpose: Using Your Story to Empower and Connect with Others

Let me tell you the story of a person who has gone through many painful experiences throughout their life but has used these experiences to serve others. This person was mocked and made fun of as a young kid when they immigrated to the United States at the age of 11. This traumatic taunting led to a stutter and a paralyzing fear of public speaking that lasted half their life.

However, decades later, this person was able to overcome their fear of public speaking and their stutter. They decided to share their painful experiences and vulnerabilities with others, specifically with first-generation and marginalized students. They have had the privilege of being a keynote speaker for First-Generation Graduation and Pilipino Graduation ceremonies and speaking at high school conferences. They have even conducted public speaking workshops for students who share their background as immigrants.

This person’s story doesn’t end there. During their undergraduate years, they experienced painful moments as a student leader. As an inexperienced student leader, they made many mistakes and felt shunned by the community they cared about. These painful memories stayed with them for decades. But instead of letting these experiences defeat them, this person used them to guide the students they have advised over the last two decades.

Recently, a group of student leaders this person advised found themselves in a difficult situation that led them to question themselves and their ability to lead. The situation even impacted their mental health, relationships, and academics. This person could relate to their struggles and offered them the perspective of someone who had faced similar challenges. They shared with the students that the moments of struggle and failure contributed to their growth and the lessons they still use today in their leadership position at their university.

This person’s story is a testament to the fact that our painful experiences and weaknesses can be used to serve others. By sharing our vulnerabilities, we can offer guidance and support to those who are going through similar struggles. This person’s journey shows that our struggles can contribute to our growth and help us become better versions of ourselves.

That person is me.


In the Line of Fire: Four Cybersecurity Tabletop Exercises for Campus Emergency Preparedness


As members of our campus’s emergency operations center team and as cybersecurity stakeholders (hint: everyone on campus is one), we need to recognize the importance of being prepared for a wide range of disruptions and disasters that can affect higher education institutions. In recent years, my campus has experienced various natural and man-made incidents, including floods, fires, technology disruptions, and power outages. In addition to these challenges, the increasing prevalence of cybersecurity threats, such as ransomware, distributed denial of service (DDoS), and other forms of attack, has heightened concerns within higher education, primarily because of the valuable student, research, medical, employee, and financial data that higher education systems possess. To enhance our preparedness and response capabilities, we regularly conduct tabletop exercises that focus on various scenarios.

Coming up with engaging and realistic tabletop scenarios can be a challenge at times. This is where ChatGPT, an AI language model, can provide value by suggesting scenarios and guiding questions to consider. As a principle, cybersecurity professionals and others who design and coordinate these tabletop exercises should use ChatGPT as an initial attempt or as an idea generator to stimulate creative thinking and ensure a comprehensive approach to their exercises.

Below are four tabletop exercises designed to address different aspects of campus preparedness and cybersecurity concerns. These exercises, generated by ChatGPT, include detailed scenarios and guiding questions to help participants better understand the potential risks and challenges and facilitate discussions on effective response strategies:

Exercise: Multi-Vector Cyber Attack
Exercise: Unauthorized Access to Sensitive Data
Exercise: Social Engineering Attack on Physical Security
Exercise: Supply Chain Compromise

By engaging in these tabletop exercises, a campus’s emergency operations center team and cybersecurity stakeholders can proactively identify gaps in preparedness and response plans, refine strategies, and strengthen their ability to protect the campus community and its valuable data assets. These exercises, whether generated by ChatGPT or other means, encourage collaboration, foster communication among departments, and promote a culture of continuous improvement in emergency response and cybersecurity efforts. Using ChatGPT as a tool to generate initial ideas for tabletop scenarios can significantly streamline the process and provide valuable insights for enhancing campus preparedness.
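For colleagues who want to experiment programmatically, the idea above can be sketched as a small prompt builder. This is a hypothetical illustration, not code from the actual exercises: the scenario names mirror the four exercises listed, while the function name, parameters, and prompt wording are my own assumptions.

```python
# Hypothetical sketch: assemble a prompt asking ChatGPT to draft a campus
# cybersecurity tabletop exercise. Scenario names mirror the four exercises
# listed above; all prompt wording is illustrative.

SCENARIOS = [
    "Multi-Vector Cyber Attack",
    "Unauthorized Access to Sensitive Data",
    "Social Engineering Attack on Physical Security",
    "Supply Chain Compromise",
]

def build_tabletop_prompt(scenario: str, num_questions: int = 5) -> str:
    """Return a prompt requesting a detailed scenario plus guiding questions."""
    if scenario not in SCENARIOS:
        raise ValueError(f"Unknown scenario: {scenario}")
    return (
        "Act as a higher education cybersecurity exercise designer. "
        f"Write a realistic tabletop exercise for the scenario '{scenario}' "
        "at a university campus. Include a detailed narrative with a timeline "
        f"of injects and {num_questions} guiding questions for the emergency "
        "operations center team."
    )

print(build_tabletop_prompt("Supply Chain Compromise"))
```

The resulting text can be pasted into ChatGPT (or sent through an API); either way, the exercise designer should still review and adapt the output, as noted above.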


Applying ChatGPT in Student Affairs Education: AI-Generated Case Studies for Realistic Learning

This blog post is about the value of using case studies and Generative Artificial Intelligence (GAI), including ChatGPT, for practical and effective learning for student affairs professionals, student leaders, and undergraduate and graduate programs.

Over the past few years, I have been a judge for a national virtual student affairs-focused case study competition. I recently completed this year’s competition, which inspired me to explore new ways of generating case studies for student affairs professionals. Additionally, I have been a mentor for the NASPA Undergraduate Fellowship Program (NUFP) for the past eight years, working with undergraduate students from underrepresented communities who are interested in pursuing student affairs as a career. As a staff advisor for student organizations for the last two decades, I have advised student leaders in navigating challenging scenarios; some scenarios recur predictably every year, while others are unique to a particular year.

These experiences, coupled with my interest in the potential of ChatGPT in higher education, led me to consider how Generative AI, like ChatGPT, can be used to create student affairs-focused case studies for undergraduate fellows, graduate programs, student leaders, advisors, and student affairs professionals.

The Value of a Well-Designed Case Study

A well-designed case study is an invaluable learning tool, especially in student affairs. It provides students and professionals with a realistic scenario that allows them to apply their knowledge, skills, and critical thinking abilities to real-life situations. This kind of experiential learning fosters a deeper understanding of theories and concepts, encourages reflection, and promotes the development of problem-solving skills.

Using ChatGPT to Generate Student Affairs Case Studies

ChatGPT, developed by OpenAI, is a powerful language model demonstrating its capacity to generate human-like text based on given inputs. By leveraging the capabilities of ChatGPT, we can generate a wide range of student affairs-focused case studies tailored to specific learning objectives and outcomes. These case studies can be used in various educational settings, such as classrooms, workshops, and professional development programs.

Below are some ways ChatGPT-generated case studies can benefit student affairs professionals and students:

  1. Customization: ChatGPT allows educators and professionals to customize case studies based on their needs, goals, and contexts. This means that case studies can be tailored to address the unique challenges and situations most relevant to a particular group of learners or professionals.
  2. Variety: With ChatGPT, we can generate a wide variety of case studies that expose learners to different issues and scenarios, rather than recycling the same problems and contexts, which may be outdated or no longer relevant.
  3. Time and Cost Efficiency: Creating case studies can be time-consuming and costly. ChatGPT streamlines the process by generating case studies quickly and efficiently, allowing educators and professionals to focus on other essential aspects of their work. It is important to note that the generated case studies may need to be modified for accuracy and detail.
  4. Adaptability: As the field of student affairs continues to evolve, ChatGPT-generated case studies can be easily adapted and updated to reflect new trends, challenges, and best practices, ensuring that learners are constantly engaged with the most current and relevant material.

Here are examples of case studies generated by ChatGPT:

Case study 1: Balancing Academic and Extracurricular Involvement
Case study 2: Supporting Students’ Mental Health
Case Study 3: Enhancing Diversity and Inclusion in Student Organizations
Case Study 4: Addressing Student Food Insecurity
Case Study 5: Building a Comprehensive Campus Wellness Program
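To illustrate the customization point above in code, here is a minimal, hypothetical sketch of a parameterized prompt builder for case studies. The topic in the example echoes one of the case studies listed; the function name, parameters, and prompt wording are illustrative assumptions, not an actual API or the exact prompts I used.

```python
# Hypothetical sketch: compose a case-study generation prompt tailored to a
# specific audience and learning objective, per the customization point above.

def build_case_study_prompt(topic: str, audience: str, learning_objective: str) -> str:
    """Compose a prompt for generating a student affairs case study."""
    return (
        "You are a student affairs educator. Write a realistic case study "
        f"on the topic '{topic}' for {audience}. The case study should "
        f"support this learning objective: {learning_objective}. Include "
        "background, key stakeholders, the central dilemma, and three "
        "discussion questions. Flag any details a facilitator should verify "
        "for local accuracy."
    )

prompt = build_case_study_prompt(
    topic="Addressing Student Food Insecurity",
    audience="NUFP undergraduate fellows",
    learning_objective="apply a student development theory to a resource-access problem",
)
print(prompt)
```

Because the topic, audience, and objective are plain parameters, the same template can produce the variety described above while keeping the facilitator responsible for reviewing accuracy.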

Conclusion

As the needs of our students and the nature of our work as student affairs professionals evolve, we must continue to explore and embrace new technologies and methodologies that enhance our ability to educate and prepare the next generation of student affairs professionals. The integration of ChatGPT in generating student affairs-focused case studies holds the potential to enrich the learning experiences of students and professionals alike.


The Many Applications of Generative AI, Beyond Student Learning and Teaching

Generative AI, including ChatGPT, is a technology that can potentially transform higher education across all areas of campus operations. Using the Higher Education Reference Model, which outlines the core capabilities of learning and teaching, research, and enabling capabilities, the document Higher Education Reference Model (HERM) Capability Model & Generative Artificial Intelligence provides ideas on how Generative AI can be applied in higher education.

While generative AI has already demonstrated its potential to revolutionize (and disrupt) the student learning experience, it can also streamline admissions, automate administrative tasks, and support facilities management and IT support. Additionally, AI can help researchers make sense of large amounts of data, identifying patterns and trends and generating new hypotheses.

Ensuring ethical and transparent use of AI is vital. As professionals in higher education, we must continue to explore the various applications of generative AI and its potential to transform higher education beyond the classroom.

Source: https://library.educause.edu/resources/2021/9/the-higher-education-reference-models


The Future of Education: Expert Panel Weighs in on the Metaverse, AI, and Blockchain in Higher Learning

Through the power of ChatGPT, we summon our facilitator, Percy, once again to moderate a round-table conversation among thought leaders in the field of higher education and technology. Previously, Percy moderated a roundtable of higher education leaders on the future of higher education: Discussing the Future of Higher Education – ChatGPT Style.

The topic for today is to imagine a future where higher education institutions operate in the time and space of the metaverse, artificial intelligence, and blockchain. The first question is, “How imminent is this, and what are the implications for higher education?” The second question is, “What is the worst-case scenario?” The last question is, “What is the best-case scenario?”

Percy: Thank you all for joining this round table discussion. Today, we’ll be exploring a future where higher education institutions operate in the metaverse, leveraging artificial intelligence and blockchain technologies. Our first question is, “How imminent will this happen and what are the implications for higher education?” Let’s hear from Ray Kurzweil first.

Ray Kurzweil: In my view, the integration of these technologies into higher education is not only imminent, but it’s already happening. Within the next decade, we’ll see a significant shift towards virtual, AI-assisted learning environments. The implications include increased accessibility, personalization, and collaboration in education, but also the risk of exacerbating existing inequalities.

Percy: Thank you, Ray. Don Tapscott, would you like to weigh in on this question?

Don Tapscott: I agree with Ray on the timeline. However, I’d like to emphasize the potential of blockchain technology in this context. It can revolutionize higher education by providing secure, decentralized systems for credentials and academic records. This will create more trust and transparency in the sector, ultimately benefiting students and institutions alike.

Percy: Cathy N. Davidson, your perspective would be invaluable here.

Cathy N. Davidson: Building on Ray’s and Don’s points, I see this transformation as an opportunity to rethink our traditional educational models. By embracing the metaverse and AI, we can break free from the constraints of time, space, and bureaucracy. This has the potential to create more inclusive, engaging, and learner-centric experiences, but it also demands a shift in how we prepare educators for this new reality.

Percy: Now, let’s hear from Joi Ito on this topic.

Joi Ito: I agree that this future is imminent, and I’d like to highlight the implications for creativity and innovation. AI and the metaverse will enable new forms of collaboration, fostering a global community of learners and researchers. However, we must be cautious not to let technology dictate the educational experience, but rather use it as a tool to support and enhance human connection and discovery.

Percy: Finally, Cathy O’Neil, what are your thoughts?

Cathy O’Neil: I think my colleagues have covered the potential benefits well, but I’d like to emphasize the potential risks. As AI and blockchain become more prevalent, we need to ensure that privacy, security, and ethical considerations are addressed. Furthermore, we must work to prevent a digital divide in education, where access to these technologies becomes a new source of inequality.

Percy: Let’s dive into the second question: “What is the worst-case scenario for a future where higher education institutions operate in a time and space in the metaverse, artificial intelligence, and blockchain?” I’d like to invite Don Tapscott to share his thoughts first. Don, as a leading expert on the intersection of technology and education, what concerns do you have about this future?

Don Tapscott: The worst-case scenario in my view would involve the exacerbation of existing inequalities in higher education. Institutions that can afford cutting-edge technologies will gain a massive advantage, while others will be left behind. Additionally, there’s a risk of personal data being mishandled on these platforms, putting student privacy at stake.

Percy: Cathy N. Davidson, as a scholar who has focused on the future of learning and digital literacy, how do you see the potential pitfalls in this scenario?

Cathy N. Davidson: I share Don’s concerns about inequality. We need to ensure that access to quality education in the metaverse is not restricted to a privileged few. Moreover, the overemphasis on technology may lead to a devaluation of human interaction, which is essential for critical thinking and creativity.

Percy: Joi Ito, as an entrepreneur and technologist who has been involved with the development of the internet and its impact on society, do you have any thoughts on this worst-case scenario?

Joi Ito: I think there’s a danger of monopolistic behavior by large corporations that could stifle innovation and limit access to these new educational opportunities. Also, we must be cautious of surveillance capitalism, where student data is used to manipulate and control, instead of empowering learners.

Percy: Cathy O’Neil, as a mathematician and author who has written about the dark side of algorithms, what do you think could go wrong in this future?

Cathy O’Neil: The use of AI in higher education could lead to biased decision-making and further marginalize underrepresented groups. We need to make sure algorithms are transparent and accountable to prevent this.

Percy: Ray Kurzweil, as a futurist and inventor who has extensively written about AI and its future impact, what are your thoughts on the potential negative consequences?

Ray Kurzweil: While I’m generally optimistic about technology, I recognize the potential for AI to become an oppressive force if misused. In the worst-case scenario, AI could be used to control students’ thoughts and actions, leading to a dystopian educational environment.

Percy: Thank you all for your insights. It seems that the main concerns in the worst-case scenario involve inequality, surveillance, loss of human interaction, and the misuse of AI. Let’s continue the conversation and explore potential solutions and best-case scenarios.

Percy: That’s a compelling vision, Don. Cathy N. Davidson, you have long advocated for the importance of rethinking traditional education structures. What do you think about the best-case scenario?

Cathy N. Davidson: Building on Don’s ideas, I believe the best-case scenario would also involve a shift from the traditional one-size-fits-all approach to a more diverse and flexible education system. The metaverse and AI can help create an environment where students can explore their passions, and collaborate with people from different backgrounds and disciplines. Blockchain could facilitate a more modular and customizable credentialing system, allowing students to build their own unique learning paths. It’s a future where education becomes a lifelong, adaptable journey.

Percy: Thank you, Cathy. Joi Ito, as someone with a background in both technology and education, what are your thoughts on the best-case scenario for higher education in this new era?

Joi Ito: I agree with Don and Cathy, but I’d like to emphasize the importance of maintaining a strong human component in this new educational landscape. While AI and the metaverse can empower students and educators, we must ensure that these technologies are designed ethically and with empathy. In the best-case scenario, we strike a balance between harnessing technology and fostering human connections. It’s crucial that we create spaces within the metaverse where diverse perspectives can come together, fostering creativity and innovation.

Percy: That’s an important point, Joi. Cathy O’Neil, you have been critical of the potential pitfalls of AI and big data. In your view, what would be the best-case scenario for higher education institutions utilizing these technologies?

Cathy O’Neil: I believe the best-case scenario would be one where AI and blockchain are used responsibly and transparently. Education institutions should ensure that the algorithms they use are fair, unbiased, and do not perpetuate existing inequalities. In addition, they must be transparent about the data collected and used to make decisions about students’ learning experiences. By being accountable and transparent, we can harness these technologies to create an equitable and just educational system for all.

Percy: Thank you all for sharing your perspectives on the best-case scenario for higher education institutions in the age of the metaverse, AI, and blockchain. It’s clear that while there are many potential benefits, it’s crucial to ensure that these technologies are used ethically and responsibly to create a more inclusive, accessible, and innovative education system.


Bing-AI Aided NCAA Men’s Basketball Bracket Picks

I was curious how Bing AI-aided predictions would perform in picking the winners of the 2023 NCAA Men’s Basketball Tournament. I have not typically filled out a bracket, so this is an unscientific exercise from a curious novice. Here are the picks. The Final Four teams are Alabama, Tennessee, Penn State, and Gonzaga; Alabama and Gonzaga play for the championship, with Alabama as the national champion.


The Heart of Legacy: Creating a Lasting Impact Through People-Centric Leadership

A few weeks ago, during our stay interview at work, I was asked, “What legacy would you like to leave at UCSB?” This question resonated with me and prompted me to think about what I want my impact to be. I invite you to reflect on this question: What legacy do you envision leaving behind in your organization?

At the same time, I’m working with a team to implement a mentorship program for our IT professionals. One goal is to provide opportunities for future leaders. The question above and this mentorship program got me thinking about the most significant impact we can have in our organizations – the opportunities and interactions we share with the folks we work with and serve. How we treat them and set them up for success can affect their lives, even generations to come, and all the people they will interact with. As a wise colleague recently told me, at the end of the day, our work begins and ends with people. People will forget the systems we build and the projects we work on, but our impact on others lasts much longer.

Sometimes, amid our busy work lives, we focus on what’s right before us and only think about the future within our time in our organizations. A wise student once told me, “As leaders, we are hard on ourselves, thinking our efforts don’t yield any meaningful results, but we are planting seeds today which will bloom beyond our time.” What seeds are you planting today that will contribute to the legacy you want to leave behind?

When asked about the legacy I’d like to leave, my immediate response was “Acts 13:36,” which I’ve come to translate as “he served God’s purpose in his generation.” This verse and my wife’s and my shared mission statement, “To make positive impacts globally through acts of compassion, inspiration, and education. In pursuit of this goal, practice an attitude of gratitude and an abundance mindset,” provide direction for our purpose in life. How do your values and beliefs shape the legacy you want to create?

As leaders and colleagues, we have a moral and ethical responsibility to ensure that those we work with and those we serve are treated with dignity and respect and given equal opportunities to succeed – not despite their differences but because of their uniqueness to our organization.

Through my experiences growing up, feeling marginalized in school and even at work when I didn’t fit the norms, I’ve come to value equal access to opportunities, inclusion, and appreciation for diversity. What values do you believe can contribute to a positive legacy? How can you invest in the people around you to help them reach their full potential?

With this in mind, our mentorship program aims to empower IT professionals and future leaders by focusing on their growth and development. By investing in people and fostering a sense of community, we can create a lasting legacy that carries on through those we’ve mentored and the people they’ll go on to mentor themselves.

Let us remember that our work and the legacy we leave in our organizations begin and end with people. People will forget the systems we build and our contributions to projects. However, our impact on others can resonate for generations to come. By focusing on developing meaningful relationships, mentoring future leaders, and living out our purpose with an attitude of gratitude and an abundance mindset, we can make an impactful difference in those around us and leave a legacy that will long outlive the systems we create. So, what legacy do you envision leaving behind?


ChatGPT 101: An Introduction to the Revolutionary AI Technology for Higher Education IT Professionals

The content of this training is generated by ChatGPT in response to a prompt provided by Joe Sabado.

Learning Outcomes:

By the end of this 55-minute training, participants will be able to:

  • Understand the concept of ChatGPT and its relevance to higher education IT professionals.
  • Identify potential use cases for ChatGPT in their respective areas of IT operations and management.
  • Explain the benefits of using ChatGPT for internal and external communication, knowledge management, automation, and leadership/management.
  • Evaluate the potential risks and challenges associated with ChatGPT in cybersecurity and HR.
  • Develop strategies for responding to cybersecurity, HR, and other IT-related concerns involving ChatGPT.

Outline:

I. Introduction (5 minutes)

  • Welcome and introductions
  • Overview of the training objectives and agenda
  • Explanation of ChatGPT and its relevance to higher education IT professionals

II. Understanding ChatGPT (10 minutes)

  • Explanation of the concept of ChatGPT
  • Overview of how ChatGPT works
  • Potential use cases for ChatGPT in higher education IT operations and management

III. Benefits of ChatGPT (15 minutes)

  • Improved internal and external communication
  • Streamlined knowledge management
  • Increased automation
  • Enhanced leadership and management

IV. Risks and Challenges of ChatGPT (10 minutes)

  • Potential cybersecurity risks
  • HR concerns
  • Other IT-related challenges

V. Case Study Activities (10 minutes)

  • Small group discussions on potential use cases for ChatGPT
  • Group presentations and feedback

VI. Responding to Concerns (5 minutes)

  • Strategies for addressing cybersecurity risks
  • Approaches to HR concerns
  • Other IT-related responses

VII. Q&A and Next Steps (5 minutes)

  • Recap of key takeaways
  • Questions and answers
  • Next steps for implementing ChatGPT in higher education IT operations and management

Case Study Activities:

Case Study 1: ChatGPT for Internal Communication (3 minutes)

  • Small group discussion on how ChatGPT can improve internal communication in a higher education IT department (5 minutes)
  • Group presentation and feedback (2 minutes)

Case Study 2: ChatGPT for External Communication (3 minutes)

  • Small group discussion on how ChatGPT can improve external communication with students, faculty, and other stakeholders in higher education (5 minutes)
  • Group presentation and feedback (2 minutes)

Case Study 3: ChatGPT for Knowledge Management (3 minutes)

  • Small group discussion on how ChatGPT can be used to manage institutional knowledge in higher education (5 minutes)
  • Group presentation and feedback (2 minutes)

Responding to Concerns:

  • Cybersecurity risks: Ensure proper authentication and access control measures, regularly monitor ChatGPT for suspicious activity, and implement regular security audits. (1 minute)
  • HR concerns: Develop guidelines for appropriate use of ChatGPT, including the avoidance of sensitive or discriminatory language, and provide training on best practices for ChatGPT usage. (2 minutes)
  • Other IT-related concerns: Develop a comprehensive risk management plan that addresses potential challenges and risks associated with ChatGPT, including data privacy and ethical considerations. (2 minutes)

Time Allocation:

  • Introduction – 5 minutes
  • Understanding ChatGPT – 10 minutes
  • Benefits of ChatGPT – 15 minutes
  • Risks and Challenges of ChatGPT – 10 minutes
  • Case Study Activities – 10 minutes
  • Responding to Concerns – 5 minutes

Q&A and Next Steps – 5 minutes

========================================================

Email me at joepsabado@gmail.com for a facilitator’s guide for this training which includes scenarios for the case study section and details for each section.


ChatGPT 101 for Student Health Services: An Introductory Training

The content of this training is generated by ChatGPT in response to a prompt provided by Joe Sabado.

Training Purpose: To provide administrative and medical staff at the student health center with an introduction to ChatGPT, a large language model trained by OpenAI, and how it can be used to support student health service-related concerns. The training will also address privacy, appropriate use, and other FERPA/HIPAA concerns related to ChatGPT usage, and offer approaches to responding to student health service-related concerns involving ChatGPT while ensuring confidentiality, privacy, and appropriate use of the tool.

Learning Outcomes:

  • Understand what ChatGPT is and how it can be used to support student health service-related concerns.
  • Identify potential privacy, appropriate use, and other FERPA/HIPAA concerns related to ChatGPT.
  • Develop approaches to responding to student health service-related concerns involving ChatGPT while ensuring confidentiality, privacy, and appropriate use of the tool.

Introduction (5 minutes):

  • Greet the participants and introduce yourself as the facilitator.
  • Explain the purpose of the training, which is to introduce them to ChatGPT and provide an overview of the training agenda.

ChatGPT Overview (10 minutes):

  • Explain what ChatGPT is and how it works.
  • Discuss the potential benefits of using ChatGPT for student health service-related concerns.
  • Provide examples of how other organizations have successfully integrated ChatGPT into their operations.

Case Studies (30 minutes):

  • Divide participants into groups and distribute case studies related to ChatGPT usage in student health service.
  • Each group will analyze the case study and identify potential privacy, appropriate use, and other FERPA/HIPAA concerns.
  • After analyzing the case study, each group will report its findings to the larger group and engage in a discussion about the best ways to address those concerns.

Approaches to Responding to Concerns (10 minutes):

  • Provide some approaches to responding to student health service-related concerns involving ChatGPT.
  • Highlight the importance of confidentiality, privacy, and appropriate use of the tool.
  • Encourage participants to share their experiences and suggestions for dealing with these types of concerns.

Wrap-up (5 minutes):

  • Summarize the key takeaways from the training.
  • Encourage participants to ask questions or provide feedback.
  • Provide any additional resources, if available.

=============================================================

Email me at joepsabado@gmail.com for a facilitator’s guide for this training which includes scenarios for the case study section and details for each section.


Student Conduct Officers: Are You Prepared to Deal with ChatGPT?

I have yet to read many articles on how higher education student conduct officers should respond to ChatGPT-related academic and behavioral misconduct incidents. So, to get some general ideas on the topic, I prompted ChatGPT for a 1-hour training that introduces ChatGPT, explains its relevance to conduct officers’ jobs, and suggests potential approaches for when such incidents arise.

This blog post aims to:

  1. Share a 1-hour training session plan that can be used as a baseline for others to adapt.
  2. Illustrate the utility of ChatGPT as a learning tool.
  3. Use ChatGPT to develop training activities.

One important note: the value of a knowledgeable instructor and effective facilitator who can lead inclusive, diverse, and productive discussions cannot be overstated. For this interactive training, the “guide by the side” approach, whereby the teacher acts as a facilitator or guide for the learning process, is more effective than “sage on the stage,” whereby the teacher lectures and the students listen and take notes.

Learning Outcomes/Objectives:

  • Define what ChatGPT is and how it works
  • Understand the potential impact of ChatGPT on academic and behavioral misconduct
  • Analyze case studies involving ChatGPT and academic or behavioral misconduct, and recommend appropriate responses
  • Develop a plan of action to respond to academic and behavioral misconduct involving ChatGPT

By the end of this training, student conduct officers should be able to:

  • Define what ChatGPT is and explain how it works, and recognize its potential applications in various fields
  • Understand the potential impact of ChatGPT on academic and behavioral misconduct, and recognize its relevance to student conduct officers’ work
  • Analyze case studies involving ChatGPT and academic or behavioral misconduct, and recommend appropriate responses, taking into account ethical considerations and best practices
  • Develop a plan of action to respond to academic and behavioral misconduct involving ChatGPT, using guidelines and best practices, and involving AI experts and stakeholders as necessary

Time Allocation:

I. Introduction (5 mins)

  • Welcome and introductions
  • Brief explanation of the purpose of the training session
  • Overview of what ChatGPT is and how it works

II. Importance and Relevance (10 mins)

  • Discuss why it is important for student conduct officers to be aware of ChatGPT
  • Explain how ChatGPT can be relevant to academic and behavioral misconduct
  • Provide examples of how ChatGPT can impact academic and behavioral misconduct

III. Case Study Activities (30 mins)

  • Divide the group into small groups
  • Provide each group with a case study activity involving ChatGPT and academic or behavioral misconduct
  • Allow each group time to analyze the case study and discuss potential responses to the situation
  • Reconvene the group and allow each group to present their analysis and recommended response

IV. Responding to Misconduct (10 mins)

  • Discuss potential approaches to responding to academic and behavioral misconduct involving ChatGPT
  • Provide guidelines for investigating and addressing potential misconduct involving ChatGPT
  • Discuss the importance of involving AI experts and developing policies to prevent future misconduct

V. Conclusion and Learning Outcomes (5 mins)

  • Recap the main points covered in the training
  • Reiterate the importance of being aware of ChatGPT and its potential impact on academic and behavioral misconduct
  • Provide learning outcomes for the training

==========================================================

It is imperative that student service practitioners, including student conduct officers, learn about ChatGPT to gain the competencies needed to better serve students and other campus constituents and to prevent harm to them. Ill-informed policies and practices have social justice implications.

While a significant portion of the conversations related to ChatGPT center on academic integrity and teaching/learning, higher education institutions comprise many areas and capabilities across teaching and learning, research, and enabling functions that warrant exploration of this technology’s implications and uses.

Email me at joepsabado@gmail.com for a facilitator’s guide for this training, which includes scenarios for the case study section and details for each section.


Student Success Support Model

What are your thoughts on what makes an effective student success support system that is suitable for the current and future needs of college students?

Can you share your ideas and/or provide feedback on what is missing from the proposed approach below? This model aims to support student success across academics, career readiness, preparation for life as a citizen, and well-being. Thank you.

Version 2 incorporates a suggestion to emphasize and separate direct face-to-face connection with staff.

[Diagram: Version 2]
[Diagram: Version 1]


Measuring the Success of Generative AI Adoption in Higher Education

In the rapidly evolving landscape of higher education, Generative AI (Gen AI) is emerging as a transformative technology with the potential to revolutionize teaching, learning, and institutional operations. However, as with any significant technological adoption, implementing Gen AI in colleges and universities comes with challenges, resource requirements, and potential unintended consequences.

As higher education institutions consider embracing Gen AI, it is crucial to carefully assess its value and impact. By understanding the costs, benefits, and risks associated with Gen AI, decision-makers can make informed choices that align with their institution’s mission, goals, and resources.

As I develop a proposed Gen AI capability maturity model and help lead our university’s AI efforts, a critical question on my mind is, “How do we measure the success of our Gen AI adoption and utilization in higher education?” This question is essential for understanding the value and impact of Gen AI investments and ensuring that these technologies effectively support institutional goals and student success.

Key Questions to Consider:

When evaluating the value of Gen AI in higher education, there are several key questions to consider:

  1. What specific problems or opportunities can Gen AI address in our institution?
  2. How will adopting Gen AI contribute to our educational mission and student success?
  3. What resources (financial, technological, human) are required to implement and maintain Gen AI systems?
  4. What are the potential opportunity costs of investing in Gen AI compared to other institutional priorities?
  5. How can we measure the effectiveness and impact of Gen AI on student learning outcomes and institutional performance?
  6. What are the possible unintended negative consequences of Gen AI adoption, such as ethical concerns, privacy issues, or widening digital divides?
  7. How can we mitigate the risks and challenges of Gen AI while maximizing its benefits?
  8. How do we measure success in implementing Gen AI in higher education?

Measuring Success: Outputs and Outcomes

Measuring success is critical to assessing Gen AI’s value in higher education. One effective approach is to adopt the concept of outputs and outcomes. A potential framework for assessing success is Prosci’s definition of project success, which encompasses both project objectives and organizational benefits.

Project objectives focus on the project’s aims, how it will contribute to solving a problem or realizing an opportunity, what it will produce or enable, and how we will know when the objectives have been achieved. These objectives are closely tied to the outputs, which are the direct products or deliverables resulting from the implementation of Gen AI, such as AI-powered chatbots, personalized learning systems, or AI-assisted research tools.

On the other hand, organizational benefits represent what the institution gains from the project. They address the problem or opportunity that the project aims to solve or realize for the organization and define the benefits that will be achieved if the problem is solved or the opportunity is realized. These benefits are often linked to the outcomes: the broader impact and positive changes that the outputs bring to the institution, students, and stakeholders. Outcomes can be measured by establishing key performance indicators (KPIs) and regularly monitoring progress.

Examples of Outputs and Outcomes:

Here are some specific examples of how project objectives and organizational benefits relate to outputs and outcomes in the context of Gen AI and higher education:

Project Objectives and Outputs:

  • Objective: Develop an AI-powered chatbot to assist students with course-related queries and administrative tasks.
  • Output: A fully functional chatbot integrated into the university’s website and learning management system.
  • Objective: Implement an AI-driven personalized learning system that adapts to individual student needs.
  • Output: An intelligent tutoring system that provides customized learning paths and resources based on student performance and preferences.
  • Objective: Create an AI-assisted research tool to help faculty and students identify relevant literature and generate insights from large datasets.
  • Output: An AI-powered research platform that integrates with academic databases and provides advanced search, analysis, and visualization capabilities.

Organizational Benefits and Outcomes:

  • Benefit: Improved student support and engagement
  • Outcome: Reduced workload for faculty and administrative staff in addressing routine student queries, leading to increased student satisfaction and retention.
  • Benefit: Personalized learning experiences
  • Outcome: Improved student learning outcomes, higher course completion rates, and increased student motivation and engagement in their studies.
  • Benefit: Enhanced research productivity and impact
  • Outcome: More efficient and effective research processes, leading to higher-quality publications, improved grant success rates, and enhanced institutional reputation.

Measuring Achievement:

  • Chatbot objectives can be measured by the number of student queries successfully handled, reduced response time, and student satisfaction scores.
  • Personalized learning system objectives can be evaluated through improvements in student grades, course completion rates, and student feedback on the effectiveness of the adaptive learning experience.
  • AI-assisted research tool objectives can be assessed by the number of users, the volume and relevance of literature identified, the insights generated from data analysis, and user satisfaction with the platform.

Realizing Benefits:

  • Improved student support can be observed through surveys measuring student satisfaction, reduced dropout rates, and increased student engagement metrics.
  • Personalized learning benefits can be realized through data analysis showing improved student performance, increased course completion rates, and higher student retention and graduation rates.
  • Enhanced research productivity can be demonstrated by tracking the number and quality of publications, grant success rates, and the institution’s research ranking and reputation.

Steps to Measure Success:

To effectively measure the success of Gen AI initiatives, institutions should follow these steps:

  1. Define clear objectives and desired outcomes for each Gen AI project or initiative.
  2. Identify specific outputs that contribute to achieving those outcomes.
  3. Establish metrics and key performance indicators (KPIs) to track progress and measure the effectiveness of outputs and outcomes.
  4. Regularly monitor and evaluate the performance of Gen AI systems against these metrics and KPIs.
  5. Use data-driven insights to make informed decisions about Gen AI initiatives’ continuation, modification, or termination.
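To make the monitoring step concrete, here is a minimal Python sketch of KPI tracking; the metric names, baselines, and targets are hypothetical and not drawn from any specific campus system.

```python
from dataclasses import dataclass

@dataclass
class KPI:
    """A single key performance indicator for a Gen AI initiative."""
    name: str
    baseline: float  # value before the initiative
    target: float    # desired value
    current: float   # latest measured value

    def progress(self) -> float:
        """Fraction of the way from baseline to target, clamped to [0, 1].

        Works whether the target is above the baseline (e.g., resolution
        rate) or below it (e.g., response time, where lower is better).
        """
        span = self.target - self.baseline
        if span == 0:
            return 1.0
        return min(max((self.current - self.baseline) / span, 0.0), 1.0)

def initiative_progress(kpis: list[KPI]) -> float:
    """Average progress across all KPIs for one project or initiative."""
    return sum(k.progress() for k in kpis) / len(kpis)

# Hypothetical chatbot KPIs: share of queries resolved without staff
# escalation (%) and mean response time in seconds (lower is better).
kpis = [
    KPI("self-service resolution rate", baseline=40.0, target=80.0, current=60.0),
    KPI("mean response time", baseline=120.0, target=30.0, current=75.0),
]
print(round(initiative_progress(kpis), 2))  # 0.5
```

A dashboard built on numbers like these supports step 5: an initiative stalled at low progress for several review cycles becomes a candidate for modification or termination.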

Tracking Progress with OKRs:

One practical framework for tracking progress in pursuing the intended outputs and outcomes is Objectives and Key Results (OKRs). OKRs are goal-setting and management tools that help organizations define and track objectives and outcomes. The framework consists of two main components:

  • Objectives: High-level, qualitative goals that are ambitious and align with the organization’s mission and strategy.
  • Key Results: Specific, quantifiable measures demonstrating progress towards achieving the objectives. Key results should be measurable, time-bound, and realistically achievable.

By adopting the OKR framework, higher education institutions can effectively monitor the progress of their Gen AI initiatives and ensure that they are on track to deliver the desired outputs and outcomes. The benefits of using OKRs include clarity and alignment, focus and prioritization, measurability and accountability, and agility and adaptability.
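As an illustration of how an institution might record and score OKRs, here is a minimal sketch; the objective, key results, and scoring convention (each key result scored as current/target, the objective as their mean) are hypothetical simplifications, not a prescribed OKR methodology.

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    """A measurable result; scored as current/target, capped at 1.0."""
    description: str
    target: float
    current: float = 0.0

    def score(self) -> float:
        return min(self.current / self.target, 1.0) if self.target else 0.0

@dataclass
class Objective:
    """A qualitative goal, scored as the mean of its key results."""
    description: str
    key_results: list[KeyResult] = field(default_factory=list)

    def score(self) -> float:
        if not self.key_results:
            return 0.0
        return sum(kr.score() for kr in self.key_results) / len(self.key_results)

# Hypothetical OKR for a campus Gen AI rollout.
obj = Objective(
    "Expand responsible Gen AI use in student support",
    [
        KeyResult("Routine queries handled by chatbot per term", target=10_000, current=6_500),
        KeyResult("Staff completing AI-ethics training (%)", target=90, current=45),
    ],
)
print(round(obj.score(), 3))  # 0.575
```

Even a lightweight record like this gives the clarity, measurability, and accountability benefits described above: each key result has an owner-checkable number, and the objective’s score summarizes progress at a glance.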

Continuous Improvement:

Measuring success involves evaluating past performance and driving continuous improvement. By regularly assessing the impact of their Gen AI initiatives, institutions can identify areas for enhancement, refine their strategies, and explore new opportunities for innovation. This iterative approach ensures that Gen AI remains a dynamic and evolving tool that continues to deliver value to students, faculty, and the institution.

Social Justice and Ethical Considerations:

As we pursue the successful adoption and utilization of Gen AI in higher education, it is crucial to consider social justice and ensure that our efforts are rooted in AI’s ethical and responsible use. We must always remember that AI is a human-centered technology designed to serve the needs and well-being of students, faculty, and the broader community. This means:

  1. Ensuring equal access to Gen AI tools and resources, regardless of socioeconomic status, race, gender, or other demographic factors.
  2. Addressing potential biases in AI algorithms and data sets to prevent the perpetuation of systemic inequalities.
  3. Protecting student privacy and data security, and being transparent about how data is collected, used, and stored.
  4. Fostering a culture of responsible AI use, where the limitations and potential risks of the technology are openly discussed and mitigated.
  5. Engaging in ongoing dialogue with students, faculty, and other stakeholders to understand their needs, concerns, and perspectives on using Gen AI in education.

In human-centered AI, the POST framework (People, Objective, Strategy, Technology) must be considered. Introduced by Charlene Li and Josh Bernoff in the book Groundswell, this framework emphasizes putting people first when adopting new technologies: ensuring that objectives align with users’ needs and goals, developing strategies that support effective implementation and use of the technology, and only then selecting the appropriate technology to meet those objectives.

By applying the POST framework to adopting Gen AI in higher education, institutions can ensure that their AI initiatives are human-centered and designed to benefit students, faculty, and staff. This means:

  • People: Understanding the needs, preferences, and concerns of the various stakeholders involved in the educational process and designing AI systems that meet their requirements.
  • Objective: Aligning the goals of AI adoption with the overall mission and objectives of the institution, ensuring that the technology enhances teaching, learning, and student success.
  • Strategy: Developing comprehensive plans for the implementation, integration, and governance of AI systems, including training and support for users, data management, and ethical guidelines.
  • Technology: Selecting the most appropriate AI tools and platforms based on the identified objectives and strategies and ensuring their compatibility with existing systems and infrastructure.

By prioritizing social justice and ethical considerations in our Gen AI initiatives, we can ensure that the technology benefits all members of our educational community and contributes to a more equitable and inclusive future for higher education.


Beyond the Code: Shaping the Future with AI-Integrated Curricula

The inspiration for this blog post was sparked by engaging discussions over the last couple of months as we advance our campus’ AI Community of Practice (COP) initiative. This week, I talked with Professor Matt Beane about the intricacies of working and learning alongside intelligent machines. I also spoke with another professor about the critical importance of embedded ethics within our curriculum. Finally, the questions I raise below are also inspired by conversations with other attendees and by listening to panelists at the University of California AI Congress, currently in progress. These interactions have led me to contemplate the following questions at the forefront of our journey into the future of higher education:

  • How can we equip our students to navigate a life enriched by artificial intelligence in their living, working, and learning environments? 
  • In this rapidly evolving landscape, how do we redefine the pedagogical frameworks of higher education to empower our students as discerning consumers and transformative agents of their futures?
  • How could a major or discipline that promotes competencies prepare our students to live, work, and learn alongside intelligent machines? 

I asked ChatGPT what such a major could look like. The prompts I used, which produced the response below, could be more sophisticated; still, they offer a starting point for answering the questions above.

Technology and Human Society Major

The Technology and Human Society major is an innovative, interdisciplinary program designed to prepare students for the challenges and opportunities of integrating artificial intelligence, robotics, and digital technologies into our daily lives. This four-year curriculum bridges the gap between technology and the liberal arts, focusing on developing a broad set of skills, including technical proficiency, ethical reasoning, and a deep understanding of the societal impact of technological innovation.

Year 1 lays the foundational knowledge in computer science, artificial intelligence, psychology, and ethics, emphasizing the role of technology in society and the ethical considerations accompanying technological advancement.

Year 1: Foundations

Fall Semester

  • Introduction to Artificial Intelligence and Robotics
    • Objective: Introduce students to AI and robotics fundamentals, including history, key technologies, and applications.
    • Content: Overview of AI (machine learning, neural networks), robotics (types, uses in industry), and the societal impacts of these technologies.
    • Activities: Lectures, essential programming assignments, and group discussions on AI’s ethical implications.
  • Introduction to Computer Science
    • Objective: Provide a foundation in computer science principles and programming.
    • Content: Basics of programming (using languages like Python or Java), data structures, algorithms, and software development processes.
    • Activities: Coding exercises, project work to develop simple applications, quizzes.
  • Introduction to Psychology
    • Objective: Offer insights into human behavior, cognition, and how this knowledge applies to technology design and interaction.
    • Content: Cognitive processes, learning theories, motivation, emotion, perception, and human factors in design.
    • Activities: Case studies, experiments, and written assignments on psychology’s role in technology.
  • General Education Requirement (Mathematics)
    • Objective: Strengthen mathematical skills foundational to technical disciplines.
    • Content: Algebra, trigonometry, basic statistics, relevance of mathematical principles in technology and science.
    • Activities: Problem sets, exams, group projects applying mathematical concepts to real-world problems.

Spring Semester

  • Introduction to Ethics in Technology
    • Objective: Explore the ethical challenges in technology development and deployment.
    • Content: Privacy, security, data ethics, AI biases, and ethical frameworks.
    • Activities: Debates, ethical dilemma case studies, reflective essays.
  • Principles of Sociology
    • Objective: Understand the impact of technology on society and vice versa.
    • Content: Social structures, cultural norms, technology’s role in societal change, digital divide issues.
    • Activities: Research papers, group presentations, and discussions on technology’s societal impacts.
  • Calculus for Engineers
    • Objective: Equip students with calculus tools applicable to engineering and technology fields.
    • Content: Limits, differentiation, integration, applications of calculus in problem-solving.
    • Activities: Problem-solving sessions, quizzes, and application projects.
  • General Education Requirement (Writing)
    • Objective: Enhance written communication skills, which are crucial for all professional fields.
    • Content: Academic writing, research paper construction, argumentative essays, technical writing basics.
    • Activities: Writing assignments, peer reviews, workshops on research and citation.

This first year sets a strong foundation across various disciplines, emphasizing the integration of technical skills with an understanding of ethical, psychological, and societal aspects. This holistic approach prepares students to navigate the complexities of technology’s role in society.

Year 2 expands on this foundation with courses in human-computer interaction, digital humanities, environmental science, and data science, encouraging students to explore the interdisciplinary nature of technology’s relationship with human culture and environmental sustainability.

Year 2: Interdisciplinary Exploration

Fall Semester

  • Human-Computer Interaction (HCI)
    • Objective: Explore the design, evaluation, and implementation of interactive computing systems for human use.
    • Content: Principles of HCI, user-centered design, usability testing, and the impact of HCI in developing compelling user interfaces.
    • Activities: Design projects, usability studies, and critiques of existing systems.
  • Environmental Science and Sustainable Technology
    • Objective: Investigate the role of technology in addressing environmental challenges and promoting sustainability.
    • Content: Fundamentals of environmental science, renewable energy technologies, sustainable design principles, and case studies on technological solutions to ecological problems.
    • Activities: Research papers on sustainable technologies, group projects designing sustainable solutions.
  • Digital Humanities
    • Objective: Introduce students to the application of digital technologies in humanities research and scholarship.
    • Content: Digital tools and methods in humanities research, digital archiving, text analysis, and the impact of digital technology on cultural artifacts.
    • Activities: Digital project assignments, workshops on digital tools, analysis of digital humanities projects.
  • Elective (e.g., Foreign Language, Creative Arts)
    • Objective: Allow students to explore interests outside their major and enhance their soft skills or global competencies.
    • Content and Activities: Depending on the elective chosen, students might engage in language learning, artistic creation, or other creative pursuits, emphasizing the importance of diverse skills in a technologically driven world.

Spring Semester

  • Data Science Fundamentals
    • Objective: Offer an introduction to the core concepts of data science and its applications.
    • Content: Basic statistics, data analysis, machine learning algorithms, and data visualization techniques.
    • Activities: Data analysis projects using real-world datasets and hands-on exercises with data science tools.
  • Technology Policy and Governance
    • Objective: Examine the complex relationship between technology innovation, policy formulation, and governance mechanisms.
    • Content: Overview of technology law, privacy issues, intellectual property rights, and governance models for emerging technologies.
    • Activities: Policy analysis papers, guest lectures from technology law and policy experts.
  • Introduction to Robotics
    • Objective: Provide foundational knowledge on robotic systems’ design, operation, and application.
    • Content: Basics of robotics, including sensors, actuators, control systems, and robot programming.
    • Activities: Robotics lab exercises, programming assignments, and design of simple robotic systems.
  • Elective (e.g., Business Fundamentals, Philosophy)
    • Objective: Allow students to explore additional disciplines that complement their understanding of technology’s societal role.
    • Content and Activities: Depending on the elective chosen, students could study the basics of business management, ethical philosophy, or other areas that broaden their educational experience.

This year builds on the foundational knowledge acquired in Year 1, expanding students’ understanding of the interaction between technology and various facets of society and the environment. The curriculum is designed to foster an interdisciplinary approach, encouraging students to apply technology in solving complex societal problems while considering ethical, environmental, and policy implications.

Year 3 delves into advanced topics such as cyber-physical systems security, cognitive science, healthcare robotics, and the societal implications of technology, preparing students for the complex ethical and practical challenges they will face in the tech-driven world.

Year 3: Advanced Topics and Applications

Fall Semester

  • Cyber-Physical Systems Security
    • Objective: Understand the security challenges and strategies of cyber-physical systems, which integrate physical processes with networked computing.
    • Content: Principles of cybersecurity, vulnerabilities of cyber-physical systems, security technologies, and case studies on securing infrastructure.
    • Activities: Simulations of cyber-attacks, design of security solutions, analysis of recent cybersecurity incidents.
  • Cognitive Science and Artificial Intelligence
    • Objective: Explore the intersection of cognitive science and AI, focusing on how AI models can replicate or augment human mental processes.
    • Content: Basics of cognitive science, neural networks, natural language processing, and cognitive robotics.
    • Activities: Projects developing simple AI models, discussions on AI and cognition, and critiques of AI’s role in understanding the human mind.
  • Robotics in Healthcare
    • Objective: Examine the application and implications of robotics in healthcare, including surgery, rehabilitation, and patient care.
    • Content: Types of healthcare robots, ethical considerations, patient safety, and robotics case studies in clinical settings.
    • Activities: Evaluating robotic healthcare technologies, guest lectures from healthcare professionals, and designing proposals for new healthcare robotics applications.
  • Elective (e.g., Advanced Programming, Machine Learning)
    • Objective: Provide in-depth technical skills relevant to the student’s interests and career goals.
    • Content and Activities: Depending on the elective, students could engage in advanced software development projects, machine learning model building, or other specialized technical tasks.

Spring Semester

  • Technology and Society
    • Objective: Delve into the complex relationship between technology and societal development, focusing on historical and contemporary perspectives.
    • Content: Technology’s role in social change, digital culture, technology and inequality, and future predictions.
    • Activities: Research papers on technology’s societal impacts, seminars with technology thought leaders, and collaborative group projects.
  • Ethical AI
    • Objective: Address the ethical dimensions of AI development and use, including bias, transparency, and accountability.
    • Content: Ethical frameworks for AI, case studies of AI ethics in practice, regulation and policy implications.
    • Activities: Ethical audits of AI systems, debates on AI ethics topics, and development of ethical guidelines for AI projects.
  • Project-Based Learning in AI and Robotics
    • Objective: Apply knowledge and skills in AI and robotics to a real-world or simulated project.
    • Content: Project management, teamwork, technical development, and project presentation.
    • Activities: Team projects from conception to demonstration, project reports, presentations to peers and faculty.
  • Elective (e.g., Innovation and Entrepreneurship, Advanced Data Science)
    • Objective: Expand students’ abilities to innovate and apply data science techniques in various contexts.
    • Content and Activities: Depending on the elective chosen, students could create a startup business plan, engage in advanced statistical analysis, or explore innovative technology solutions.

This third year is crucial for deepening students’ technical expertise and understanding of the broader implications of technology. It emphasizes applying theoretical knowledge to practical and ethical challenges, preparing students for advanced study, research, or professional careers in technology and society.

Year 4 culminates in a capstone project that integrates the knowledge and skills acquired throughout the program alongside courses in professional development and electives that allow for specialization in areas of personal interest.

Year 4: Specialization and Integration

Fall Semester

  • Capstone Project I
    • Objective: Begin a comprehensive project integrating knowledge and skills acquired throughout the major. The project should address a significant issue at the intersection of technology, society, and ethics.
    • Content: Project proposal development, literature review, project planning, and initial implementation.
    • Activities: Weekly project meetings, progress presentations, peer feedback sessions.
  • Seminar on Current Topics in Technology and Society
    • Objective: Engage with cutting-edge discussions on the impacts of technology on society, including emerging trends and challenges.
    • Content: Guest lectures, current articles and case studies, and student-led seminars on topics of interest.
    • Activities: Participate in discussions, present seminar topics, and write reflective essays.
  • Elective in Area of Specialization (e.g., Advanced Robotics, AI in Finance)
    • Objective: Deepen technical knowledge and skills in a specific area of interest related to technology and society.
    • Content and Activities: Depending on the elective, coursework could involve advanced technical training, project work, and industry or research applications.
  • Elective (General Education or Free Elective)
    • Objective: Offer a final opportunity to explore interests outside the major or to complement the major with additional skills or knowledge.
    • Content and Activities: Varied, depending on the student’s interests and the offerings available, such as arts, humanities, social sciences, or additional technical electives.

Spring Semester

  • Capstone Project II
    • Objective: Complete and present the capstone project, demonstrating the integration of technical skills, ethical considerations, and societal impact.
    • Content: Final implementation, analysis, and evaluation of the project results.
    • Activities: Public project presentation, final report submission, peer and faculty feedback.
  • Professional Development in Technology
    • Objective: Prepare for career success in the technology sector, including job search strategies, professional networking, and life-long learning skills.
    • Content: Resume building, interview skills, professional ethics, continuing education opportunities.
    • Activities: Workshops, mock interviews, networking events, alumni panels.
  • Elective in Area of Specialization (e.g., Ethical Hacking, Digital Marketing)
    • Objective: Continue to build expertise in a chosen specialization, preparing for specific career paths or advanced study.
    • Content and Activities: Advanced coursework and projects tailored to the specialization, potentially including certifications, competitions, or collaborations with industry partners.
  • Elective (General Education or Free Elective)
    • Objective: Complete the undergraduate experience with a course that broadens perspectives or enhances personal and professional skills.
    • Content and Activities: Options could include advanced study in a foreign language, leadership development, creative arts, or other areas of personal interest.

The final year is designed to culminate the interdisciplinary learning experience, with a significant focus on the capstone project that embodies the student’s understanding and application of technology in society. It also emphasizes professional development and specialization, preparing students to transition from academic study to career or further education. This curriculum equips graduates with a comprehensive skill set that is both technically proficient and ethically aware, ready to tackle the challenges of a rapidly evolving technological landscape.

Graduates of the Technology and Human Society major will be uniquely equipped to contribute to various fields, from technology development and policy to digital humanities and environmental sustainability. They will possess the critical thinking, ethical reasoning, and technical skills necessary to navigate and shape the future of our increasingly digital world.

AI’s role in higher education is inevitable; it is already integrated into all aspects of the campus, from teaching and learning to research and administrative operations. What is not inevitable is the utility and effectiveness of AI in the future world where our students will live, learn, and work. Higher education must consider the following questions to prepare students to live, work, and learn effectively alongside AI.

  • How can we equip our students to navigate a life enriched by artificial intelligence in their living, working, and learning environments? 
  • In this rapidly evolving landscape, how do we redefine the pedagogical frameworks of higher education to empower our students as discerning consumers and transformative agents of their futures?
  • How could a major or discipline that promotes competencies prepare our students to live, work, and learn alongside intelligent machines? 


  • Copyright © 1996-2010 Joe Sabado - Higher Education & Technology Leadership. All rights reserved.