Groq and HUMAIN Collaborate to Scale OpenAI's New Models for AI Developers
Groq and HUMAIN Launch OpenAI's New Open Models Day Zero
Groq, a leader in AI inference acceleration, has teamed up with HUMAIN, an AI services company based in Saudi Arabia, to announce the immediate availability of OpenAI's two new open models: gpt-oss-120B and gpt-oss-20B. The launch promises to change how AI development is approached, especially in regions like the Kingdom of Saudi Arabia.
What’s New?
The newly launched models, now accessible via GroqCloud, come with impressive capabilities, including a full context length of 128K tokens and real-time responses. This expansion represents a foundational upgrade, stepping up from Groq's prior support of OpenAI's open-source initiatives, including the large-scale deployment of models like Whisper.
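Since GroqCloud exposes an OpenAI-compatible Chat Completions endpoint, calling the new models can look roughly like the sketch below. The endpoint path and the model identifier "openai/gpt-oss-120b" are assumptions here; check GroqCloud's own documentation for the exact IDs.

```python
# Hypothetical sketch of a GroqCloud chat-completion request.
# The model ID "openai/gpt-oss-120b" and the endpoint path are
# assumptions, not confirmed by this article.
import json
import urllib.request

GROQ_API_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_request(api_key: str, prompt: str,
                  model: str = "openai/gpt-oss-120b") -> urllib.request.Request:
    """Build an HTTP request for a single chat completion."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        GROQ_API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# To actually send it (requires a valid key and network access):
# resp = urllib.request.urlopen(build_request("YOUR_KEY", "Hello"))
# print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the interface is OpenAI-compatible, existing OpenAI client libraries should also work by pointing their base URL at GroqCloud.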
Jonathan Ross, the CEO of Groq, highlighted the revolutionary nature of OpenAI's new offerings, stating, "OpenAI is setting a new high performance standard in open source models. Groq was built to run models like this, fast and affordably, so developers can utilize them from day zero." By joining forces with HUMAIN, Groq aims to boost local access and support, fostering quicker, more innovative development in the Saudi AI landscape.
Enhanced Model Capabilities
The strength of the gpt-oss-120B and gpt-oss-20B models lies not only in their speed but also in their extended contextual capabilities. Groq's platform integrates tools like code execution for enhanced reasoning and web search functionalities, which are crucial for obtaining real-time information. This ensures that developers have the resources to create more sophisticated applications right from the start.
In practice, the gpt-oss-120B model runs at more than 500 tokens per second, while the gpt-oss-20B model exceeds 1,000 tokens per second, the kind of efficiency and speed that matters for developers pushing the boundaries of AI.
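The throughput figures above translate directly into response times. A rough estimate, using the quoted rates as assumptions:

```python
# Rough generation-time estimate from the throughput figures above
# (>500 tok/s for gpt-oss-120B, >1000 tok/s for gpt-oss-20B).
def generation_seconds(tokens: int, tokens_per_second: float) -> float:
    """Approximate seconds to generate `tokens` output tokens."""
    return tokens / tokens_per_second

# A 1,000-token response at ~500 tok/s takes about 2 seconds;
# at ~1,000 tok/s, about 1 second.
```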
Cost Efficiency of OpenAI Models
In addition to impressive performance, Groq ensures that these models come at an accessible price point, generating significant savings for developers. The pricing structure is as follows: gpt-oss-120B is offered at $0.15 per million input tokens and $0.75 per million output tokens, while gpt-oss-20B is priced at $0.10 for input and $0.50 for output tokens. Notably, for a limited period, tool calls made using OpenAI's new models will have no associated charges, further incentivizing early adoption.
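The per-million-token rates above make cost estimates straightforward. A minimal back-of-envelope calculator, using the article's quoted prices:

```python
# USD per million tokens, from the pricing quoted above.
# (Tool calls are additionally free for a limited period.)
PRICING = {
    "gpt-oss-120b": {"input": 0.15, "output": 0.75},
    "gpt-oss-20b":  {"input": 0.10, "output": 0.50},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for a single request."""
    rates = PRICING[model]
    return (input_tokens * rates["input"]
            + output_tokens * rates["output"]) / 1_000_000

# Example: a 4,000-token prompt with a 1,000-token reply on
# gpt-oss-120B costs (4000*0.15 + 1000*0.75) / 1e6 = $0.00135.
```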
Global Impact from Day Zero
What sets this launch apart is Groq's commitment to deploying these models globally from day one. With a data center network spanning North America, Europe, and the Middle East, Groq delivers high-performance, low-latency AI inference wherever developers are located, freeing them from geographic constraints in adopting OpenAI's advanced models.
About Groq and HUMAIN
Groq has established itself as the leading AI inference platform, focusing on redefining performance at competitive pricing. Their custom-built chip architecture allows for instantaneous model execution, delivering predictable performance that developers have come to rely upon. With over 1.9 million developers utilizing their services, Groq stands at the forefront of AI innovation.
HUMAIN, for its part, is recognized for comprehensive AI services across sectors, bolstering both public- and private-sector capabilities with AI solutions designed to transform industries. Its emphasis on sector-specific AI products aims to enhance competitiveness both globally and within the Saudi market.
The collaboration between Groq and HUMAIN is poised to pave the way for a new wave of AI innovation, harnessing the potential of OpenAI’s models for developers keen to push the envelope of what's possible in their projects.