On March 25, 2026, Axelidea Inc., an Osaka-based member of the Minoru IP Group, introduced its latest language model, AXELIDEA-QUON-14B-Japanese-v01, a Japanese-focused large language model (LLM) with 14 billion parameters. Unlike traditional LLMs, which have primarily emphasized the accumulation of knowledge, this model focuses on effective knowledge integration and problem-solving.
Utilizing a vast array of patent documents as its foundational learning resource, AXELIDEA-QUON-14B has attained a significant milestone by mastering the art of deriving solutions through strategic combinations of knowledge. This approach represents a paradigm shift in how we think about language models and their applications.
The performance of AXELIDEA-QUON-14B has been validated on the JA Leaderboard, where it secured the top spot among instruction-tuned models in its category, outperforming models from Google and Microsoft. This result demonstrates its strong ability to understand and respond to complex inquiries in Japanese.
Model Overview
- Model Name: AXELIDEA-QUON-14B-Japanese-v01
- Parameter Count: 14 billion
- Base Model: shisa-ai/shisa-v2.1-unphi4-14b
- License: MIT License
You can download AXELIDEA-QUON-14B from its official Hugging Face page.
Benchmarking Excellence
AXELIDEA-QUON-14B has achieved an impressive average score of 71.99 on the Japanese language evaluation across seven tasks on the JA Leaderboard, demonstrating its significant advancements over other models, including:
- shisa-v2.1-unphi4-14b: 71.44
- Gemma-3-12B-IT (Google): 63.42
- Phi-4 (Microsoft): 59.30
- Sarashina2-13B (SB Intuitions): 56.43
All models were evaluated under identical conditions, ensuring fair comparisons.
Background Challenges
As LLM technology advances at a rapid pace, the dual challenge remains of achieving high accuracy in Japanese knowledge comprehension while fostering creative thinking. Creativity is an inherently subjective capability, which makes it difficult to evaluate, and learning new tasks carries the risk of catastrophic forgetting.
Axelidea has successfully tackled these issues by implementing a unique five-dimensional creativity evaluation system alongside innovative knowledge-preserving fine-tuning techniques.
Innovative Technologies Behind AXELIDEA-QUON-14B
1. Diverse Creativity through Expert Teams: Based on the Torrance Tests of Creative Thinking (TTCT), a total of 60 specialized experts were trained across four creativity categories and 15 sub-domains, allowing the model to cover a wide range of creative output.
2. Five-Dimensional Creativity Reward Model (QUON-CreativityBench): This proprietary heuristic scoring system rates patent-derived training data on originality, elaboration, feasibility, fluency, and flexibility, ensuring that only high-quality examples are selected from vast patent databases.
3. Quantum Computing for Expert Selection: Using Quadratic Unconstrained Binary Optimization (QUBO), the model identifies the optimal expert combinations from the 60 candidates, folding multiple objectives (quality, team diversity, and balance) into a single energy function. This lays a foundation for scalable implementations should the number of experts increase significantly.
4. Knowledge-Preserving Fine-Tuning: Catastrophic forgetting is a major obstacle; the model retains common knowledge while acquiring creativity by incorporating techniques that protect essential facts during fine-tuning.
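To make the five-dimensional scoring concrete, here is a minimal sketch of how such a filter could work. The dimension names come from the release, but the aggregation, weights, threshold, and document structure are all invented for illustration; the actual QUON-CreativityBench scoring is not published.

```python
# Illustrative sketch of five-dimensional quality scoring for training-data
# selection. Dimension names are from the release; everything else
# (weights, threshold, data layout) is a hypothetical example.
DIMENSIONS = ("originality", "elaboration", "feasibility", "fluency", "flexibility")

def creativity_score(scores, weights=None):
    """Aggregate per-dimension scores (each in [0, 1]) into one number."""
    if weights is None:
        weights = {d: 1.0 for d in DIMENSIONS}  # equal weighting by default
    total = sum(weights[d] for d in DIMENSIONS)
    return sum(weights[d] * scores[d] for d in DIMENSIONS) / total

def select_training_data(candidates, threshold=0.6):
    """Keep only candidates whose aggregate score clears the threshold."""
    return [c for c in candidates if creativity_score(c["scores"]) >= threshold]

# Toy corpus: one strong and one weak patent-derived example.
docs = [
    {"id": "patent-a", "scores": dict(zip(DIMENSIONS, (0.9, 0.8, 0.7, 0.9, 0.8)))},
    {"id": "patent-b", "scores": dict(zip(DIMENSIONS, (0.3, 0.4, 0.5, 0.6, 0.4)))},
]
kept = select_training_data(docs)
print([d["id"] for d in kept])  # patent-a passes, patent-b is filtered out
```

A weighted aggregate like this lets the pipeline tune how much, say, feasibility counts relative to originality without changing the per-dimension scorers.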
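The QUBO formulation for expert selection can be sketched as follows. The release does not publish Axelidea's actual energy function, so the terms, weights, and toy data below are assumptions: a quality reward per expert, a redundancy penalty between similar experts, and a quadratic penalty enforcing the team size, solved here by brute force on a small instance.

```python
# Hypothetical QUBO sketch for expert-team selection. Binary variable
# x_i = 1 means expert i joins the team. The energy function rewards
# individual quality, penalizes overlap between similar experts, and
# enforces a target team size k via a quadratic penalty.
import itertools

def qubo_energy(x, quality, similarity, k, lam=10.0):
    """E(x) = -sum_i q_i x_i + sum_{i<j} s_ij x_i x_j + lam * (sum_i x_i - k)^2"""
    n = len(x)
    e = -sum(quality[i] * x[i] for i in range(n))
    e += sum(similarity[i][j] * x[i] * x[j]
             for i in range(n) for j in range(i + 1, n))
    e += lam * (sum(x) - k) ** 2
    return e

def brute_force_solve(quality, similarity, k):
    """Exhaustive minimization over all 2^n assignments; a real deployment
    would hand the same matrix to a QUBO solver or quantum annealer."""
    n = len(quality)
    best = min(itertools.product([0, 1], repeat=n),
               key=lambda x: qubo_energy(x, quality, similarity, k))
    return [i for i, xi in enumerate(best) if xi]

# Toy instance: 6 candidate experts, pick a team of 3.
quality = [0.9, 0.8, 0.85, 0.6, 0.7, 0.75]
similarity = [[0.0] * 6 for _ in range(6)]
similarity[0][1] = similarity[1][0] = 0.5  # experts 0 and 1 overlap heavily
team = brute_force_solve(quality, similarity, k=3)
print(team)  # -> [0, 2, 5]: expert 1 is skipped despite high quality
```

Note how the redundancy term changes the answer: the three highest-quality experts are 0, 1, and 2, but because experts 0 and 1 overlap, the minimum-energy team swaps expert 1 for expert 5. Folding all objectives into one energy function is what makes the problem QUBO-shaped and thus amenable to quantum or annealing hardware as the expert pool grows.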
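The release does not disclose how knowledge is preserved during fine-tuning, so the following shows one standard option for mitigating catastrophic forgetting: an Elastic Weight Consolidation (EWC) style penalty that anchors parameters deemed important for previously learned knowledge. All values here are toy numbers.

```python
# Hypothetical sketch of an EWC-style knowledge-preserving penalty.
# fisher[i] approximates how important parameter i was to old knowledge;
# moving an important parameter away from its pretrained anchor is costly.
def ewc_loss(task_loss, params, anchor, fisher, lam=1.0):
    # L = task_loss + (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2
    penalty = 0.5 * lam * sum(
        f * (p - a) ** 2 for f, p, a in zip(fisher, params, anchor)
    )
    return task_loss + penalty

anchor = [1.0, -2.0, 0.5]    # weights after pretraining (toy values)
fisher = [10.0, 0.1, 10.0]   # importance estimates per weight
drifted = [1.5, 0.0, 0.5]    # candidate fine-tuned weights

# Drifting the unimportant second weight is cheap; drifting the important
# first weight by 0.5 dominates the penalty.
loss = ewc_loss(0.2, drifted, anchor, fisher, lam=1.0)
print(loss)
```

In effect, the fine-tuning objective trades off creativity gains on the new data against a quadratic cost for disturbing weights that encode essential facts.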
Design Philosophy: Enhancing Knowledge Integration
As famously stated by James W. Young, "An idea is simply a new combination of existing elements." AXELIDEA-QUON-14B enhances existing knowledge bases and strengthens creative thinking patterns via refined attention mechanisms.
Where traditional LLM fine-tuning has focused primarily on the breadth of knowledge, this model learns the subtleties of combining knowledge elements, honing in on attention patterns that tackle problem-solving as demonstrated in patent literature.
Recognition and Awards
On March 24, 2026, Axelidea was honored with the Special Award for Regional Contributions at the GENIAC-PRIZE, an initiative supported by the Ministry of Economy, Trade and Industry and NEDO that promotes research and societal implementation of generative AI. The accolade recognizes companies addressing unique regional challenges through innovative practices.
For more information on the GENIAC-PRIZE, visit GENIAC's official website.
About Axelidea Inc.
Axelidea Inc. operates with offices in Osaka and Tokyo, focusing on transformative AI technologies. For more details about the company, visit their official site at axelidea.com.
Acknowledgments
The researchers gratefully acknowledge the use of the supercomputer TSUBAME 4.0 at the Institute of Science Tokyo in this research.