Myrtle.ai Introduces Revolutionary Microsecond ML Inference Latency
Myrtle.ai, a leading innovator in machine learning (ML) acceleration, has announced that its VOLLO® inference accelerator now runs on Napatech's NT400D1x series of SmartNICs. The combination allows businesses to achieve microsecond-level latencies in ML inference, setting a new benchmark for the industry.
This enhancement comes as organizations across sectors such as financial services, telecommunications, and cybersecurity seek faster and more efficient ways to deploy ML algorithms. The VOLLO inference accelerator runs close to the network, inside the SmartNIC itself, enabling organizations to execute inference tasks at speeds that can fall below one microsecond, a critical threshold for applications that demand instant decision-making.
Peter Baldwin, the CEO of Myrtle.ai, expressed enthusiasm about the partnership with Napatech, stating, "We're excited to be working with the world leader in SmartNIC sales to enable unprecedented low latencies for ML inference." He emphasized that this release aligns with their customers' demand for ever-lower latencies and harnesses the full potential of VOLLO's capabilities.
The VOLLO accelerator is not limited to a single type of ML model: it supports a wide range of model architectures, including LSTMs (Long Short-Term Memory networks), CNNs (Convolutional Neural Networks), MLPs (Multi-Layer Perceptrons), Random Forests, and Gradient-Boosted Decision Trees. This flexibility opens the door to diverse use cases across financial trading, network management, and safety applications, where low latency translates into significant advantages in operational efficiency and security.
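To give a sense of what these supported architectures look like in practice, the sketch below defines a compact MLP of the kind typically targeted for very low-latency inference. The model, its sizes, and its shapes are purely illustrative and are not drawn from Myrtle.ai's documentation; the actual steps for compiling a model with the VOLLO compiler are described at vollo.myrtle.ai and are not reproduced here.

```python
# Illustrative only: a small, fixed-shape MLP of the kind suited to
# low-latency inference. Layer types, sizes, and limits supported by
# VOLLO are defined in its own documentation, not by this sketch.
import torch
import torch.nn as nn


class TinyMLP(nn.Module):
    """A small multi-layer perceptron with a fixed input size."""

    def __init__(self, in_features: int = 32, hidden: int = 64, out_features: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, out_features),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


model = TinyMLP().eval()
example_input = torch.randn(1, 32)  # batch size 1, typical for latency-critical inference

with torch.no_grad():
    scores = model(example_input)
print(scores.shape)  # torch.Size([1, 4])
```

Models like this are deliberately small and shape-fixed, since single-sample, batch-size-one inference is the usual operating point when every microsecond counts.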
Jarrod J.S. Siket, Chief Product Marketing Officer at Napatech, shared his excitement about the synergy between the two companies, highlighting the real benefits it brings to financial markets, particularly in the adoption of ML for automated trading strategies. He remarked, "The VOLLO compiler is designed to make it very easy for ML developers to use our SmartNICs and this really strengthens our portfolio of products and services."
For developers and companies eager to harness machine learning inference at microsecond latencies, Myrtle.ai has made the VOLLO compiler available for download at vollo.myrtle.ai. The compiler lets developers experiment and determine the latencies achievable with their own models when deployed on the NT400D1x series of Napatech SmartNICs.
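Before evaluating a model with the downloaded compiler, a developer might first establish a host-side reference latency to compare against. The minimal sketch below measures per-inference wall-clock time for a small model in plain PyTorch on the CPU; it deliberately contains no VOLLO-specific calls, since the compiler's own workflow and reported latencies are documented at vollo.myrtle.ai.

```python
# Hypothetical baseline: measure host (CPU) inference latency for a small
# model, as a point of comparison with accelerator-reported latencies.
import statistics
import time

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 4)).eval()
x = torch.randn(1, 32)

with torch.no_grad():
    # Warm up to avoid measuring one-off allocation and dispatch costs.
    for _ in range(100):
        model(x)

    samples = []
    for _ in range(1000):
        start = time.perf_counter()
        model(x)
        samples.append(time.perf_counter() - start)

print(f"median host latency: {statistics.median(samples) * 1e6:.1f} µs")
```

A baseline of this kind makes the gap between general-purpose host inference and sub-microsecond SmartNIC inference concrete for a given model.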
Myrtle.ai, whose expertise spans a wide variety of ML network types, delivers world-class inference technologies built on FPGA-based platforms from leading suppliers. The company has created accelerators targeted at FinTech, speech processing, and recommendation systems, reflecting a commitment to improving ML deployments across numerous industries.
Myrtle.ai's pairing of the VOLLO inference accelerator with Napatech SmartNICs marks a significant step forward in low-latency machine learning, bringing near-instantaneous data processing and decision-making within reach. As the technology matures, we can expect broader adoption and innovative applications that drive efficiency and performance to new heights.