Groq and HUMAIN Launch Game-Changing OpenAI Models on GroqCloud
In an exciting development for the AI landscape, Groq, a pioneering firm in rapid inference, has partnered with HUMAIN, a leading AI service provider in Saudi Arabia, to announce the immediate availability of two groundbreaking OpenAI models on GroqCloud. These models, namely gpt-oss-120B and gpt-oss-20B, are set to transform the way developers utilize artificial intelligence by offering unparalleled real-time performance and local support.
Both gpt-oss-120B and gpt-oss-20B (open-weight models with roughly 120 billion and 20 billion parameters, respectively) feature a full context length of 128K tokens, enabling them to generate responses swiftly and integrate seamlessly with server-side tools. This launch builds on Groq's robust support for OpenAI’s open-source initiatives, including the extensive deployment of the Whisper model, further cementing its commitment to advancing open-source technology.
As Jonathan Ross, CEO of Groq, highlights, "OpenAI is setting a new benchmark for high-performance open-source models. Groq has been engineered for the rapid and cost-effective deployment of such models, ensuring developers worldwide can access and leverage these capabilities from day one. Our collaboration with HUMAIN enhances local access and support within Saudi Arabia, empowering regional developers to create smarter, faster solutions."
Unmatched Performance and Cost Effectiveness
The Groq platform is designed to maximize the potential of these OpenAI models, with support for the full 128K context window and integrated tools for code execution and web search. These capabilities surface real-time, relevant information while enabling complex workflows, making AI deployment more efficient than ever.
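For developers who want to try the models right away, GroqCloud exposes an OpenAI-compatible API, so existing client code typically needs only a new base URL and model name. The following is a minimal sketch using the openai Python SDK; the endpoint URL and the model identifier shown are assumptions and should be verified against the current GroqCloud documentation.

```python
# Minimal sketch: calling gpt-oss-120B on GroqCloud through its OpenAI-compatible
# endpoint. The base URL and the model identifier ("openai/gpt-oss-120b") are
# assumptions here; check the GroqCloud docs for the current values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",  # assumed GroqCloud OpenAI-compatible endpoint
    api_key=os.environ["GROQ_API_KEY"],         # your GroqCloud API key
)

response = client.chat.completions.create(
    model="openai/gpt-oss-120b",                # assumed model ID for gpt-oss-120B
    messages=[
        {"role": "user", "content": "Summarize today's top AI research trends."}
    ],
)
print(response.choices[0].message.content)
```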
The unique Groq architecture delivers exceptional speed without compromising on accuracy or cost. On GroqCloud, gpt-oss-120B runs at over 500 tokens per second, while gpt-oss-20B exceeds 1,000 tokens per second. The pricing structure set by Groq is also highly attractive:
- gpt-oss-120B: $0.15 per million input tokens and $0.75 per million output tokens
- gpt-oss-20B: $0.10 per million input tokens and $0.50 per million output tokens
Moreover, for a limited time, Groq is waiving fees for the built-in tools used alongside the OpenAI models, encouraging developers to explore these powerful capabilities at minimal cost.
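To put those rates in perspective, a quick back-of-the-envelope estimate shows what a given workload would cost at the listed prices. The snippet below is a minimal sketch; the dictionary keys are illustrative labels rather than official API identifiers.

```python
# Back-of-the-envelope cost estimate at the listed GroqCloud rates.
# Prices are USD per million tokens: gpt-oss-120B at $0.15 in / $0.75 out,
# gpt-oss-20B at $0.10 in / $0.50 out.
PRICES = {
    "gpt-oss-120b": {"input": 0.15, "output": 0.75},
    "gpt-oss-20b":  {"input": 0.10, "output": 0.50},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a given number of input and output tokens."""
    p = PRICES[model]
    return (input_tokens / 1_000_000) * p["input"] + (output_tokens / 1_000_000) * p["output"]

# Example: 2M input tokens and 500K output tokens on gpt-oss-120B
# costs 2 * 0.15 + 0.5 * 0.75 = $0.675.
print(f"${estimate_cost('gpt-oss-120b', 2_000_000, 500_000):.3f}")
```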
Global Reach from Day One
Thanks to its global data centers located across North America, Europe, and the Middle East, Groq ensures reliable, high-performance AI inference no matter where developers are based. With this launch on GroqCloud, these OpenAI models are accessible worldwide with minimal latency, enhancing global collaboration and productivity.
About Groq
Groq is at the forefront of AI inference technology, providing a fresh perspective on cost-efficiency and performance. Its proprietary Language Processing Unit (LPU) and cloud infrastructure are architected to execute powerful models reliably, at the lowest token costs, and with fast response times. Trusted by over 1.9 million developers, Groq is committed to helping create faster and smarter solutions.
About HUMAIN
Backed by Saudi Arabia's Public Investment Fund (PIF), HUMAIN is a leading AI company specializing in comprehensive AI solutions across four pivotal domains: advanced data centers, state-of-the-art infrastructure, cutting-edge AI models, and transformative solutions that combine deep industry expertise with practical applications. HUMAIN’s mission is to drive exponential advancements across sectors by harnessing synergies between human intellect and artificial intelligence.
With the launch of these innovative OpenAI models, Groq and HUMAIN are not just enhancing the capabilities available within Saudi Arabia but are also contributing significantly to the global AI ecosystem.