Summary
💻 Register for Poll
💻 Welcome to Alta3 Live
Learning Your Environment
💻 Using Vim
💻 Tmux
💻 VS Code Integration
💻 Revision Control with GitHub
Deep Learning Intro
💬 What is Intelligence?
💬 Generative AI Unveiled
💬 The Transformer Model
💬 Feed Forward Neural Networks
💻 Tokenization
💻 Word Embeddings
💻 Positional Encoding
Build a Transformer Model from Scratch
💬 PyTorch
💻 Construct a Tensor from a Dataset
💻 Orchestrate Tensors in Blocks and Batches
💻 Initialize PyTorch Generator Function
💻 Train the Transformer Model
💻 Apply Positional Encoding and Self-Attention
💻 Attach the Feed Forward Neural Network
💻 Build the Decoder Block
💻 Transformer Model as Code
Prompt Engineering
💬 Introduction to Prompt Engineering
💻 Getting Started with Gemini
💻 Developing Basic Prompts
💻 Intermediate Prompts: Define Task/Inputs/Outputs/Constraints/Style
💻 Advanced Prompts: Chaining, Set Role, Feedback, Examples
Hardware Requirements
💬 The GPU's Role in AI Performance (CPU vs. GPU)
💬 Current GPUs: Cost vs. Value
💬 Tensor Cores vs. Older GPU Architectures
Pre-trained LLMs
💬 A History of Neural Network Architectures
💬 Introduction to the llama.cpp Interface
💬 Preparing A100 for Server Operations
💻 Operate Llama 2 Models with llama.cpp
💻 Selecting Quantization Level to Meet Performance and Perplexity Requirements
💬 Running the llama.cpp Package
💻 Llama Interactive Mode
💻 Persistent Context with Llama
💻 Constraining Output with Grammars
💻 Deploy Llama API Server
💻 Develop LLaMa Client Application
💻 Write a Real-World AI Application using the Llama API
Fine-Tuning
💻 Using PyTorch to Fine-Tune Models
💻 Advanced Prompt Engineering Techniques
Testing and Pushing Limits
💻 Maximizing Model Limits
💬 Curriculum Path: Generative AI