
vishal
121 subscribers
15 views · 2025/03/27
A video walkthrough of an analysis in which I compare the outputs of sequential linear layer forward passes against a single merged layer (built from the product of those layers' weights) across different floating point precisions. In general, float64 has the lowest error, while bfloat16 and mixed precision have the highest relative error. The relative error also grows as the matrix size increases.
Colab: colab.research.google.com/drive/1ngnfXmheHEP8tkcyu…
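
The full notebook is in the Colab link above; as a rough, minimal sketch of the kind of comparison described (assuming bias-free linear layers, so each forward pass is a plain weight matmul, and using illustrative sizes and dtypes rather than the notebook's exact setup):

# Minimal sketch: sequential vs. merged linear layers at several dtypes.
# Assumptions (not from the video): bias-free layers, square matrices,
# CPU execution; torch.autocast mixed precision is omitted for brevity.
import torch

def relative_error(n: int, dtype: torch.dtype) -> float:
    torch.manual_seed(0)
    # Two sequential layers' weights and an input batch, drawn in float64.
    A = torch.randn(n, n, dtype=torch.float64)
    B = torch.randn(n, n, dtype=torch.float64)
    x = torch.randn(n, n, dtype=torch.float64)

    # Sequential path: two forward passes in the target dtype.
    seq = (x.to(dtype) @ A.to(dtype).T) @ B.to(dtype).T
    # Merged path: multiply the weights first, then one forward pass.
    merged = x.to(dtype) @ (B.to(dtype) @ A.to(dtype)).T

    # Relative error between the two paths, normalized by a float64 reference.
    ref = (x @ A.T) @ B.T
    return ((merged.double() - seq.double()).norm() / ref.norm()).item()

for n in (64, 256, 1024):
    for dtype in (torch.float64, torch.float32, torch.bfloat16):
        print(f"n={n:4d}  {str(dtype):15}  rel. error = {relative_error(n, dtype):.3e}")

The intuition behind the trend: merging rounds the product of the weights once in the low-precision dtype, while the sequential path rounds each intermediate activation, so the two paths diverge more as precision drops and as the dot products accumulate more terms with growing n.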