Phase 2
Visualize how LLMs focus on different parts of text to understand context
Attention is a key mechanism in transformer models: it lets the model weight different parts of the input differently when generating each output token, so the relevant context for that token is emphasized.
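As a rough illustration of what those attention weights are, here is a minimal NumPy sketch of scaled dot-product attention, the core computation in transformer attention. The function and variable names are hypothetical and for demonstration only; they are not part of this project's code.

```python
# Minimal sketch of scaled dot-product attention (illustrative only).
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q, K: (seq_len, d_k), V: (seq_len, d_v)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of each query to every key
    weights = softmax(scores, axis=-1)   # attention weights: each row sums to 1
    return weights @ V, weights          # weighted sum of values, plus the weights

# Example: self-attention over 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
output, attn = scaled_dot_product_attention(x, x, x)  # Q = K = V = x
print(attn.round(2))  # row i shows how much token i attends to every other token
```

The `attn` matrix returned here is the kind of token-to-token weight map that attention visualizations display: each row shows how strongly one position attends to every other position in the input.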