# AI horizons 25-11 – Kimi K2 Thinking and the New AI Balance of Power
Analysis of China's Kimi K2 Thinking, a low-cost, open-weight AI model challenging US dominance in reasoning and agentic tasks.
Anthropic's internal 'soul document', used to train Claude Opus 4.5's personality and values, has been confirmed and partially revealed.
Analyzes the use of reinforcement learning to enhance reasoning capabilities in large language models (LLMs) like GPT-4.5 and o3.
Explains the concept and purpose of input masking in LLM fine-tuning, using a practical example with Axolotl for a code PR classification task.
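The mechanics behind input masking are simple: the loss is computed only on the completion tokens, while prompt tokens get an ignore label so they contribute no gradient (in Axolotl this is commonly controlled by the `train_on_inputs` setting). Below is a minimal sketch of the idea, not Axolotl's internal code; the `gpt2` tokenizer and the PR-classification prompt are placeholders.

```python
# Minimal sketch of input masking for supervised fine-tuning: prompt tokens
# are labeled -100, the default ignore_index of torch.nn.CrossEntropyLoss,
# so gradients come only from the completion tokens.
import torch
from transformers import AutoTokenizer

IGNORE_INDEX = -100  # CrossEntropyLoss skips positions with this label


def build_masked_example(tokenizer, prompt: str, completion: str):
    prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    completion_ids = tokenizer(
        completion + tokenizer.eos_token, add_special_tokens=False
    )["input_ids"]
    input_ids = prompt_ids + completion_ids
    # Mask the prompt: the model is never penalized for the input text,
    # only for how it produces the target completion.
    labels = [IGNORE_INDEX] * len(prompt_ids) + completion_ids
    return {
        "input_ids": torch.tensor(input_ids),
        "labels": torch.tensor(labels),
    }


tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder tokenizer
example = build_masked_example(
    tokenizer,
    prompt="Classify this PR diff as bugfix or feature:\n<diff>\n",
    completion="bugfix",
)
print(example["labels"])  # -100 for prompt positions, real ids for "bugfix"
```

With this masking in place, only the answer tokens drive the weight update, which is exactly what a classification-style fine-tune wants.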
Learn techniques to speed up PyTorch model training by 8x with PyTorch Lightning while maintaining accuracy.
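The exact recipe behind that figure isn't spelled out here, but speedups of this kind in Lightning usually come from a few Trainer-level switches: mixed precision, GPU execution, parallel data loading, and larger effective batch sizes. A minimal, self-contained sketch with a toy model and dummy data (all names below are illustrative, not the article's code):

```python
# Common PyTorch Lightning speed levers applied to a toy classifier:
# mixed precision, GPU auto-selection, multi-worker loading, gradient
# accumulation. Assumes Lightning 2.x ("import lightning as L").
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import lightning as L


class TinyClassifier(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
        self.loss_fn = nn.CrossEntropyLoss()

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = self.loss_fn(self.net(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=1e-3)


if __name__ == "__main__":
    # Dummy data stands in for a real dataset.
    ds = TensorDataset(torch.randn(4096, 32), torch.randint(0, 2, (4096,)))
    loader = DataLoader(ds, batch_size=256, num_workers=4, pin_memory=True)

    trainer = L.Trainer(
        max_epochs=3,
        accelerator="auto",         # picks GPU when one is available
        devices="auto",
        precision="16-mixed",       # mixed precision; assumes a CUDA GPU
        accumulate_grad_batches=2,  # larger effective batch, same memory
    )
    trainer.fit(TinyClassifier(), loader)
```

On modern GPUs, mixed precision alone is often the single largest win; the rest of the gap is typically closed by keeping the GPU fed (more DataLoader workers, pinned memory, bigger batches).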
Learn how to deploy a deep learning research demo on the cloud using the Lightning framework, including GPU training and model sharing.