RachidAR (RuAR)
AI & ML interests
1.58-bit LLMs
Recent Activity
- liked a model 1 day ago: unsloth/gemma-4-26B-A4B-it-GGUF
- liked a model 4 months ago: Tongyi-MAI/Z-Image-Turbo
- liked a model 4 months ago: alibaba-pai/Z-Image-Turbo-Fun-Controlnet-Union
Papers
- The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits (Paper • 2402.17764 • Published • 628)
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection (Paper • 2403.03507 • Published • 189)
- Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models (Paper • 2402.19427 • Published • 57)
- ResLoRA: Identity Residual Mapping in Low-Rank Adaption (Paper • 2402.18039 • Published • 11)
SOTA architecture
Ternary LLMs & Knowledge distillation & SOTA
- Addition is All You Need for Energy-efficient Language Models (Paper • 2410.00907 • Published • 151)
- The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits (Paper • 2402.17764 • Published • 628)
- LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding (Paper • 2404.16710 • Published • 80)
- Beyond Scaling Laws: Understanding Transformer Performance with Associative Memory (Paper • 2405.08707 • Published • 34)
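The theme of this collection, ternary (1.58-bit) LLMs, can be illustrated with a minimal sketch of the absmean weight quantization described in the BitNet b1.58 paper ("The Era of 1-bit LLMs"): weights are scaled by their mean absolute value, then round-clipped to {-1, 0, 1}. The function names and the sample weights below are illustrative, not from any listed model.

```python
def absmean_ternary_quantize(w, eps=1e-8):
    # BitNet b1.58-style absmean quantization (illustrative sketch):
    # scale by the mean absolute weight, then round and clip to {-1, 0, 1}.
    gamma = sum(abs(x) for x in w) / len(w) + eps
    q = [max(-1, min(1, round(x / gamma))) for x in w]
    return q, gamma

def dequantize(q, gamma):
    # Recover an approximate float weight from the ternary code and scale.
    return [x * gamma for x in q]

weights = [0.4, -0.05, -1.2, 0.9, 0.02, -0.3]
q, gamma = absmean_ternary_quantize(weights)
# q holds only -1, 0, or 1; gamma is the single shared float scale,
# so each weight costs log2(3) ≈ 1.58 bits plus one scale per tensor.
```

Matrix multiplies against such ternary weights reduce to additions and subtractions, which is the efficiency argument the 1.58-bit papers build on.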
RWKV-GGUF
Fine-Tuned or Trained models (by @RachidAR)