## Today's Work
**Goal:** Tune Sharpness-Aware Minimization (SAM) hyperparameters for the Gated Cross-Attention deepfake detection model.
**Experiments:**
| Config | EER (%) | Notes |
|--------|---------|-------|
| Baseline (Adam) | 3.1 | — |
| SAM ρ=0.05 | 2.7 | slight improvement |
| SAM ρ=0.10 | 2.3 | best so far |
| SAM ρ=0.20 | 2.6 | over-smoothed |
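All rows above are compared on Equal Error Rate. As a reference for how that metric is obtained, here is a hedged sketch of EER computed by a simple threshold sweep; the scores and labels are toy values, not the evaluation pipeline behind the table:

```python
import numpy as np

# Sketch of Equal Error Rate (EER): the operating point where the
# false-acceptance rate (spoof accepted) equals the false-rejection
# rate (bona fide rejected). Toy data, hypothetical score convention.
def compute_eer(scores, labels):
    # labels: 1 = bona fide, 0 = spoof; higher score = more bona-fide-like
    best_gap, best_eer = np.inf, 1.0
    for t in np.unique(scores):
        far = np.mean(scores[labels == 0] >= t)  # spoofs accepted at t
        frr = np.mean(scores[labels == 1] < t)   # bona fide rejected at t
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer

scores = np.array([0.9, 0.8, 0.3, 0.7, 0.1, 0.2, 0.6, 0.4])
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])
eer = compute_eer(scores, labels)
```

A full evaluation would interpolate between thresholds on the ROC curve; the discrete sweep here is enough to illustrate where FAR and FRR cross.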
**Key finding:** ρ=0.10 gives the best trade-off between sharpness regularization and convergence speed. The gated cross-attention module benefits most: its attention weights become more stable across speakers.
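For reference, the ρ being swept is the perturbation radius in SAM's two-step update. A minimal sketch of that update, shown on a toy quadratic loss rather than the actual detection model (loss, gradient, and learning rate here are hypothetical stand-ins):

```python
import numpy as np

# Minimal SAM sketch: ascend to the worst-case point within an L2 ball
# of radius rho, then descend using the gradient taken at that point.
def grad(w):
    return w  # gradient of the toy quadratic loss 0.5 * ||w||^2

def sam_step(w, rho=0.10, lr=0.1):
    g = grad(w)
    # Step 1 (ascent): perturb weights toward higher loss, radius rho.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Step 2 (descent): base-optimizer update with the sharpness-aware
    # gradient evaluated at the perturbed weights.
    return w - lr * grad(w + eps)

w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w)
```

Larger ρ flattens more aggressively, which matches the ρ=0.20 row plateauing: past some radius the perturbed gradient stops tracking the true descent direction.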
**Next steps:**
- Run ablation on semantic vs. acoustic branch contribution
- Try data augmentation with codec simulation (MP3 / Opus)