NABLAS Corporation Unveils NABLA-VL
NABLAS Corporation, a pioneering AI research institute based in Bunkyo, Tokyo, has officially announced the release of its large-scale vision-language model (VLM), NABLA-VL. The 15-billion-parameter model understands text, images, and videos and operates in both Japanese and English. NABLA-VL was developed as part of GENIAC (Generative AI Accelerator Challenge), a national project funded by the Japanese Ministry of Economy, Trade and Industry and NEDO.
Features of NABLA-VL
NABLA-VL stands out due to its impressive characteristics that cater to various research and industrial applications:
- Accelerated Training and Inference: A token compression technique removes 87.5% of visual tokens, sharply reducing the amount of data the model must process. This yields roughly a 50% reduction in training time and a 23% reduction in inference time, lowering both development and operating costs (an illustrative sketch follows this list).
- Leading Benchmark Performance: As of May 2025, NABLA-VL outperforms other domestic AI models on several Japanese and English benchmarks, making it a reliable, high-accuracy, and versatile foundation model.
- Open-Source Availability: The model and its training/inference code are released under the Apache 2.0 license, so researchers and developers can easily access and apply the technology.
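The announcement does not describe the compression mechanism itself, but the arithmetic is simple: removing 87.5% of visual tokens means keeping one token in eight. The sketch below is a generic illustration of that kind of reduction (sequence-level average pooling with stride 8 in PyTorch), not NABLA-VL's actual implementation.

```python
# Hypothetical illustration only: NABLA-VL's real compression method is not
# described in the release. This shows one generic way to drop 87.5% of
# visual tokens (keep 1 in 8) before they reach the language model.
import torch
import torch.nn.functional as F

def compress_visual_tokens(tokens: torch.Tensor, keep_ratio: float = 0.125) -> torch.Tensor:
    """Average-pool a (batch, num_tokens, dim) sequence so that only
    `keep_ratio` of the visual tokens remain (0.125 -> 87.5% removed)."""
    stride = round(1 / keep_ratio)                 # 8 for a 12.5% keep ratio
    x = tokens.transpose(1, 2)                     # (batch, dim, num_tokens)
    x = F.avg_pool1d(x, kernel_size=stride, stride=stride)
    return x.transpose(1, 2)                       # (batch, num_tokens / 8, dim)

# Example: 576 patch tokens from a vision encoder shrink to 72.
visual_tokens = torch.randn(1, 576, 1024)
print(compress_visual_tokens(visual_tokens).shape)  # torch.Size([1, 72, 1024])
```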
Accessibility and Further Information
The source code for NABLA-VL is hosted on Hugging Face. For technical details, refer to the company's technical blog, which provides in-depth explanations and resources.
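For readers who want to try the model, loading it should follow the standard Hugging Face workflow. The snippet below is a minimal sketch: the repository id `nablasinc/NABLA-VL` and the use of `AutoModel`/`AutoProcessor` are assumptions, so check the official Hugging Face page and technical blog for the exact identifiers and usage.

```python
# Sketch only: the repository id and model classes below are assumptions,
# not confirmed by the announcement. Consult the Hugging Face model card
# for the actual loading instructions.
from transformers import AutoModel, AutoProcessor

MODEL_ID = "nablasinc/NABLA-VL"  # hypothetical repository id

processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(MODEL_ID, trust_remote_code=True, device_map="auto")
```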
Outstanding Benchmark Results
As of May 2025, NABLA-VL achieved the highest scores among domestic models on numerous key English and Japanese benchmarks, including MMMU and LLaVA-Bench (In-the-Wild). On some benchmarks it even surpassed OpenAI's gpt-4o-2024-11-20. The scores below illustrate NABLA-VL's capabilities as a vision-language model.
| Benchmark | Score | Notes |
| --- | --- | --- |
| JMMMU | 45.68 | Japanese version of MMMU |
| JDocQA | 29.16 | Document QA including charts |
| MECHA | 59.63 | Benchmark related to Japanese land and events |
| MMMU | 51.11 | QA requiring undergraduate-level knowledge |
| JVB-ItW | 4.06 | Japanese version of LLaVA-Bench-In-the-Wild |
| VG-VQA | 3.97 | Benchmark using Visual Genome dataset |
| MulIm-VQA | 4.27 | Multiple image benchmark |
Scores are based on evaluations using the llm-jp-eval-mm framework. The JMMMU benchmark is notable for its role in assessing large multimodal models in Japanese.
Future Endeavors
NABLAS Corporation remains committed to advancing research and development in foundation models, aiming to bring the integration of vision and language into practical use across society. The company will continue to collaborate with the research community and industry, with a technical focus on lightweight models, real-time inference, and high-resolution image understanding.
Contact Us
For inquiries regarding the fundamental vision-language model NABLA-VL, please feel free to contact us through our inquiry form linked on our website.
About NABLAS Corporation
Founded in March 2017, NABLAS Corporation is a venture spun out from the University of Tokyo, specializing in AI education, R&D, and consulting services. The company strives to put AI technologies to work in society and contribute to a better future through inventive solutions. Its mission centers on discovering gradients toward the future.
Company Profile:
- Name: NABLAS Corporation
- CEO: Kotaro Nakayama
- Headquarters: 6-17-9 Hongo, Bunkyo, Tokyo
- Business: AI Talent Development, Consulting, R&D
- URL: NABLAS Website