LG Unveils Cutting-Edge Multimodal AI EXAONE 4.5 with Advanced Reasoning Capabilities
On April 9, 2026, LG AI Research officially unveiled its latest advance in artificial intelligence, EXAONE 4.5. The cutting-edge multimodal AI model interprets and reasons across both text and images, setting a new benchmark in AI capabilities.
The Evolution of EXAONE
Building on the foundation laid by EXAONE 1.0, released in December 2021, EXAONE 4.5 is a sophisticated Vision-Language Model (VLM) that integrates a proprietary vision encoder with a Large Language Model (LLM) under a unified architecture. The evolution from EXAONE 1.0 to 4.5 marks significant progress toward LG's broader project, dubbed 'K-EXAONE', which aims to build a robust AI foundation model.
The second phase of the project was completed last August, and plans for the third phase are already in motion. The ambition is to transform EXAONE from a conventional AI model into one with 'Physical Intelligence', capable of understanding and making decisions in real-world environments rather than being restricted to virtual spaces.
Benchmarking Performance
The unveiling also included impressive benchmark results. EXAONE 4.5 demonstrated superior visual processing abilities, achieving an average score of 77.3 across five significant STEM benchmarks and outperforming competitors such as OpenAI's GPT-5-mini, Anthropic's Claude 4.5 Sonnet, and Alibaba's Qwen-3 235B. These results indicate that EXAONE 4.5 excels at complex document understanding, making it particularly valuable in industrial contexts.
The model's ability to analyze and reason through intricate documents, including contracts and financial statements, highlights its potential utility across various sectors. Notably, in coding tasks it outperformed Google's latest model, scoring 81.4 on LiveCodeBench v6.
Operational Efficiency and Language Support
Despite housing 33 billion parameters, roughly one-seventh the size of the 'K-EXAONE' model, EXAONE 4.5 maintains impressive text comprehension and reasoning capabilities. LG AI Research attributes this efficiency to its proprietary Hybrid Attention architecture and to multi-token prediction techniques that speed up inference.
In a bid to enhance global accessibility, LG has also expanded the model's language support beyond Korean and English, now including Spanish, German, Japanese, and Vietnamese.
Cultivating an AI Research Ecosystem
LG AI Research has been proactive in fostering an AI research ecosystem, a commitment underscored by its decision to release EXAONE 4.5 on the popular open-source platform Hugging Face. The release permits use in research, academic, and educational contexts, encouraging collaborative development of AI applications.
Its recent 'LG Aimers' Hackathon signals a commitment to nurturing young AI talent. During the program, participants focused on creating lightweight variants of the EXAONE model, helping cultivate a new generation of AI expertise.
Committing to Cultural Sensitivity
Additionally, LG is dedicated to making EXAONE a model that appreciates the intricacies of Korean culture and history. By incorporating quality data from the Northeast Asian History Foundation, LG AI Research emphasizes the importance of cultural context in AI development, noting that the growing number of AI models capable of conversing in Korean does not automatically confer cultural awareness.
As stated by Myoungshin Kim, Head of AI Safety and Trust at LG,