- Qwen-VL: A Versatile Vision-Language Model for Understanding ...
In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. Starting from the Qwen-LM as a
- Gated Attention for Large Language Models: Non-linearity, Sparsity, ...
The authors respond that they will add experiments on the Qwen architecture, give the hyperparameters, and promise to open-source one of the models. Reviewer bMKL is the only reviewer to initially score the paper in the negative region (borderline reject); they have some doubts about the experimental section.
- Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
In this paper, we explore a way out and present the newest members of the open-sourced Qwen families: the Qwen-VL series. Qwen-VLs are a series of highly performant and versatile vision-language foundation models based on the Qwen-7B (Qwen, 2023) language model. We empower the LLM basement with visual capacity by introducing a new visual receptor, including a language-aligned visual encoder and a
- SAM-Veteran: An MLLM-Based Human-like SAM Agent for Reasoning ...
For Qwen+SAM, we report the results of generating boxes for SAM. For Seg-Zero, the MLLM outputs both the bounding boxes and the points for SAM in a single step, whereas SegAgent adopts a fixed number of 7 refinement iterations for mask prediction.
- Mamba-3: Improved Sequence Modeling using State Space Principles
This submission introduces Mamba-3, an "inference-first" state-space linear-time sequence model that aims to improve over prior sub-quadratic backbones (notably Mamba-2 and Gated DeltaNet) along three dimensions: modeling quality, state-tracking capability, and real-world decode efficiency. The core methodological contributions are: generalized trapezoidal discretization to improve
- AgentFold: Long-Horizon Web Agents with Proactive Context Folding
LLM-based web agents show immense promise for information seeking, yet their effectiveness on long-horizon tasks is hindered by a fundamental trade-off in context management. Prevailing ReAct-based
- TwinFlow: Realizing One-step Generation on Large Models with ...
Qwen-Image-Lightning is the 1-step leader on the DPG benchmark and should be marked as such in Table 2. Distillation fine-tuning vs. full training method: Qwen-Image-TwinFlow (and possibly also TwinFlow-0.6B and TwinFlow-1.6B, see question below) leverages a pretrained model that is fine-tuned
- Zihan Qiu - OpenReview
Zihan Qiu, Zekun Wang, Bo Zheng, Zeyu Huang, Kaiyue Wen, Songlin Yang, Rui Men, Le Yu, Fei Huang, Suozhi Huang, Dayiheng Liu, Jingren Zhou, Junyang Lin
- Junyang Lin - OpenReview
Promoting openness in scientific communication and the peer-review process
- LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation
LLaVA-MoD introduces a framework for creating efficient small-scale multimodal language models through knowledge distillation from larger models. The approach tackles two key challenges: optimizing network structure through a sparse Mixture of Experts (MoE) architecture, and implementing a progressive knowledge transfer strategy. This strategy combines mimic distillation, which transfers general