Pathway to Skilled, Responsible AI.

A future-ready pathway that builds foundational skills in generative AI, prompt engineering, and responsible AI use—preparing students to thrive in an AI-powered workforce.

High School & Career Center

Building Skilled and Responsible AI Professionals.

Applied AI Foundations Pathway

The Applied AI Foundations Pathway introduces students to the essential knowledge and skills needed to understand, communicate with, and responsibly use modern artificial intelligence systems. The pathway consists of two highly interactive courses. In Generative AI & Prompt Engineering, students begin building mastery in one of the most in-demand skills in the AI industry: prompt engineering. In AI Ethics & Responsible Use, students examine the risks, responsibilities, and societal impacts of AI, learning how to evaluate bias, validate outputs, and apply responsible AI practices. As organizations worldwide accelerate their adoption of AI, prompt engineering and ethical AI literacy have become critical competencies, positioning students for success in a rapidly evolving, AI-powered workforce.

Prompt Engineering. The skill for unlocking AI’s potential.

Prompt engineering is the skill of directing generative AI models such as ChatGPT, Google Gemini, and Microsoft Copilot, where carefully designed inputs shape the quality of the output. With well-crafted prompts, structured context, and iterative refinement, users steer AI to generate text, imagery, and insights tailored to real-world needs. This skill has become indispensable across industries, from healthcare and finance to retail, education, and manufacturing, enabling teams to amplify productivity, personalize services, and accelerate innovation. Its blend of clear language, context-aware design, and rapid iteration makes it a key skill for building conversational agents, automations, and emerging AI-powered solutions. From research labs to startup teams and enterprise workflows, prompt engineering is a foundational capability students need to participate in today’s AI-driven economy.

Applied AI Foundations Pathway

These two dynamic courses empower students to master prompt-engineering techniques for generative AI while cultivating a deep understanding of AI ethics and responsible innovation, equipping them with the fluency, insight, and adaptable skill set needed for thoughtful, real-world AI practice.

Course 1: Generative AI & Prompt Engineering

Course Description

Generative AI & Prompt Engineering is the entry point to our Applied AI Foundations Pathway: a dynamic, hands-on, project-driven course in which students begin building mastery in one of the most in-demand skills in the AI industry, prompt engineering. This course prepares students to earn the Certiport Generative AI Fundamentals industry certification, a powerful early credential that validates their ability to communicate effectively with generative AI systems such as ChatGPT, Google Gemini, and Microsoft Copilot. As organizations worldwide accelerate their adoption of AI, prompt engineering has emerged as a critical skill across every industry, including technology, business, healthcare, finance, education, marketing, government, and the creative arts, making this course an invaluable early advantage for students.

Students explore how generative AI systems work, but the heart of the course is mastering the craft of writing powerful prompts. Learners practice industry-ready techniques—including zero-shot, few-shot, contextual, chain-of-thought, persona-based, and multimodal prompting—and discover how subtle changes to wording, structure, constraints, and examples dramatically influence AI output quality. They create prompts that generate text, images, audio, and video; refine and troubleshoot AI responses; reduce bias; and optimize clarity, accuracy, and reliability.
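To make the contrast between these techniques concrete, here is a minimal classroom-style sketch in Python. The task, helper names, and examples are illustrative assumptions, not part of any specific curriculum or tool; the prompts produced are plain text that any chat-based AI system would accept.

```python
# Sketch: how zero-shot and few-shot prompts differ in structure.
# Function names and the sentiment task are hypothetical examples.

def zero_shot_prompt(task: str) -> str:
    """A zero-shot prompt states the task with no worked examples."""
    return f"Task: {task}\nAnswer:"

def few_shot_prompt(task: str, examples: list[tuple[str, str]]) -> str:
    """A few-shot prompt prepends input/output examples so the model
    can infer the expected format and style before answering."""
    shots = "\n".join(f"Input: {inp}\nOutput: {out}" for inp, out in examples)
    return f"{shots}\nInput: {task}\nOutput:"

task = "Classify the sentiment of: 'The battery died after an hour.'"
examples = [
    ("Classify the sentiment of: 'I love this phone.'", "positive"),
    ("Classify the sentiment of: 'The screen cracked on day one.'", "negative"),
]

print(zero_shot_prompt(task))
print("---")
print(few_shot_prompt(task, examples))
```

The same pattern extends to the other techniques students practice: a persona-based prompt would prepend a role description, and a chain-of-thought prompt would add an instruction to reason step by step before answering.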

By the end of the course, students can write sophisticated prompts across multiple AI models, analyze and improve AI outputs, and apply the structured methodologies used by professional prompt engineers, fully prepared to earn the Certiport Generative AI Fundamentals certification and advance into Course 2: AI Ethics & Responsible Use.

Industry Certification: Certiport Generative AI Fundamentals

AI Concepts
  • Artificial intelligence fundamentals
  • Machine learning concepts
  • Deep learning and neural networks
  • Generative AI models
  • Large language models (LLMs)
  • Model training and inference basics
  • Rule-based vs. learning-based systems
  • Text-based AI interfaces
  • Zero-shot prompting
  • Few-shot prompting
  • Contextual prompting
  • Chain-of-thought prompting
  • Role-based prompting
  • Prompt chaining
  • Goal-first prompting
  • Tree-of-thought prompting
  • Persona-based prompting
  • Multimodal prompting
  • Image generation prompting
  • Audio generation prompting
  • Video generation prompting
  • Prompt structure and formatting
  • Constraint-based prompting
  • Bias reduction strategies
  • Output analysis and evaluation
  • Prompt troubleshooting techniques
  • Prompt modifiers and refinements
  • Interaction management with AI models
  • Ethical and responsible prompt use
  • Building prompt libraries
Outcomes
  • Explain how artificial intelligence, machine learning, and generative models operate.
  • Describe how large language models learn from data and generate outputs.
  • Write clear and effective prompts for text-based AI systems.
  • Apply zero-shot, few-shot, contextual, and chain-of-thought prompting techniques.
  • Use role-based, goal-first, persona-based, and tree-of-thought prompting strategies.
  • Create structured prompt chains to guide complex AI behavior.
  • Write multimodal prompts for generating images, audio, and video.
  • Analyze AI outputs for accuracy, clarity, bias, and reliability.
  • Refine and troubleshoot prompts to improve model responses.
  • Apply ethical and responsible prompting practices when generating AI content.
  • Manage interactions with AI tools to achieve consistent, high-quality outputs.
  • Build a library of prompts tailored to different tasks and use cases.
  • Compare the strengths and limitations of different generative AI models and platforms.
  • Identify common failure modes in AI-generated outputs and propose corrective strategies.
  • Incorporate constraints, examples, and structured formatting to guide model behavior.
  • Evaluate how prompt clarity, specificity, and context influence output quality.
  • Use prompt engineering to support research, writing, planning, and creative production tasks.
  • Collaborate with peers to design, test, and refine prompts for real-world scenarios.
  • Document prompt design decisions and explain the reasoning behind prompt structure and revisions.
  • Demonstrate competencies aligned with the Certiport Generative AI Fundamentals certification.

Course 2: AI Ethics & Responsible Use

Course Description

AI Ethics & Responsible Use is the second course in our Applied AI Foundations Pathway: a rigorous, inquiry-driven course that teaches students how to evaluate the risks, responsibilities, and real-world implications of artificial intelligence. As AI becomes embedded across every industry, understanding how to use, design, and govern these systems responsibly has become a critical skill for the future workforce. This course gives students the essential ethical framework needed to navigate a world shaped by AI and to participate thoughtfully in its continued growth.

Students explore how ethical considerations influence both AI model development and AI model use, examining topics such as bias, misinformation, transparency, privacy, accountability, environmental impact, access and equity, and legal and policy gaps. Through real-world case studies and hands-on analysis activities, students learn how to identify harm, evaluate system risks, validate outputs, and design responsible AI workflows.
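A responsible AI workflow of the kind described above can be prototyped very simply. The sketch below is a toy checklist a student might build; the specific checks are illustrative assumptions, not an industry standard, and real validation would also involve fact-checking against trusted sources.

```python
# Toy output-validation checklist for a responsible AI workflow.
# The checks are illustrative assumptions chosen for a classroom exercise.

def validate_output(response: str, cited_sources: list[str]) -> dict:
    """Run simple checks on an AI response and report any that fail.
    Any failed check means the output needs human review before use."""
    checks = {
        "non_empty": bool(response.strip()),        # the model actually answered
        "has_sources": len(cited_sources) > 0,      # claims should be traceable
    }
    failed = [name for name, passed in checks.items() if not passed]
    return {"approved": not failed, "needs_review": failed}

# A sourced answer passes; an unsourced, empty one is flagged for human review.
print(validate_output("Paris is the capital of France.", ["atlas entry"]))
print(validate_output("", []))
```

The design point students take away is that validation is a gate with a human in the loop: any failed check routes the output to review rather than directly to use.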

By the end of the course, students can assess the ethical risks of AI systems, apply strategies to reduce bias and misinformation, evaluate the societal impacts of AI, and articulate best practices for responsible AI use—fully prepared to apply ethical reasoning in all future AI and data science coursework and real-world scenarios.

AI Concepts
  • Ethical principles in artificial intelligence
  • Fairness, accountability, and transparency
  • Bias and misinformation risks
  • Data ethics and responsible data use
  • Privacy and data rights
  • Environmental impact of AI systems
  • Model transparency and explainability
  • Algorithmic discrimination
  • Ethical considerations in model development
  • Ethical considerations in model deployment
  • Responsible AI use practices
  • Misuse, harm, and prevention strategies
  • Output validation and fact-checking
  • Overreliance on AI and critical thinking
  • Access, equity, and inclusion in AI
  • Legal, regulatory, and policy gaps
  • AI governance frameworks
  • Security and adversarial risks
  • Human oversight and user responsibilities
  • Strategies for promoting ethical AI use
Outcomes
  • Explain key ethical principles that guide responsible AI development and use.
  • Describe how bias, misinformation, and discrimination can emerge in AI systems.
  • Evaluate datasets and models for fairness, representation, and potential harm.
  • Analyze how transparency, accountability, and explainability influence AI trustworthiness.
  • Assess the environmental and societal impacts of AI model development.
  • Apply privacy and data-rights principles when working with AI tools.
  • Identify ethical risks that occur during AI deployment and everyday use.
  • Recognize misuse scenarios and propose strategies to prevent AI-enabled harm.
  • Validate AI outputs through fact-checking, cross-referencing, and critical evaluation.
  • Detect overreliance on AI systems and apply critical-thinking strategies to mitigate it.
  • Examine issues of access, equity, and inclusion in AI technologies.
  • Interpret legal, regulatory, and policy gaps that shape AI governance.
  • Apply responsible AI use practices in academic, personal, and professional contexts.
  • Evaluate security risks, adversarial attacks, and vulnerabilities in AI systems.
  • Describe the role of human oversight and user responsibility in AI workflows.
  • Identify governance frameworks used to regulate AI development and deployment.
  • Document ethical considerations when designing or using AI systems.
  • Collaborate with peers to analyze real-world AI case studies and propose ethical solutions.
  • Communicate ethical risks and recommendations to technical and non-technical audiences.
  • Demonstrate the ability to apply ethical reasoning to AI systems, tools, and workflows.

Take the first step.

Let’s talk about how we can help.