AI chip startup MatX reportedly raises $80M funding round - SiliconANGLE

MatX Inc., a chip startup led by former Google LLC engineers, has reportedly raised $80 million in fresh funding.

TechCrunch today cited sources as saying that Anthropic PBC backer Spark Capital led the investment. The raise, which is described as a Series B round, reportedly values MatX at more than $300 million. The milestone comes a few months after the company raised $25 million in initial funding from a group of prominent investors.

MatX was founded in 2022 by Chief Executive Officer Reiner Pope and Chief Technology Officer Mike Gunter. The duo previously worked at Google, where they helped develop the company's TPU line of artificial intelligence processors. They also worked on other machine learning projects during their stints at the search giant.

MatX is developing chips for training AI models and performing inference, or the task of running a neural network in production after it's trained. According to the company, customers will have the ability to build machine learning clusters that contain hundreds of thousands of its chips. MatX estimates that such clusters will be capable of powering AI models with millions of simultaneous users.

The company's website states that its chip design prioritizes cost-efficiency over latency. Nevertheless, MatX expects the processors to be "competitive" on latency as well: for AI models with 70 billion parameters, the company is promising latencies of less than a hundredth of a second per token.
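As a rough illustration of what that figure implies, a latency bound of one hundredth of a second per token translates directly into per-user generation speed. The numbers below are back-of-the-envelope assumptions for illustration, not MatX specifications:

```python
# Back-of-the-envelope: what sub-10 ms/token means for generation speed.
# All figures here are illustrative assumptions, not MatX's numbers.
latency_per_token_s = 0.01          # claimed upper bound: 1/100 s per token

tokens_per_second = 1 / latency_per_token_s   # at least 100 tokens/s per user
response_tokens = 500                         # a longish chat reply (assumption)
response_time_s = response_tokens * latency_per_token_s  # about 5 seconds

print(tokens_per_second, response_time_s)
```

At that rate, even a lengthy model response would stream back in a few seconds per user.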

MatX plans to give customers "low-level control over the hardware." Several existing AI processors, including Nvidia Corp.'s market-leading graphics cards, provide similar capabilities. They allow developers to modify how computations are carried out in a way that improves the performance of AI models.

Nvidia's chips, for example, provide low-level controls that make it easier to implement operator fusion. This is a machine learning technique that reduces the number of times an AI model must move data to and from a graphics card's memory. Such data transfers incur processing delays, which means that lowering their frequency speeds up calculations.
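In code, operator fusion amounts to collapsing back-to-back operations into a single pass over the data. A minimal sketch in plain Python (illustrative only; real fusion happens inside compiled GPU kernels, and the function names here are invented for the example):

```python
# Unfused: two separate passes over the data. The intermediate list
# `scaled` plays the role of a tensor written out to GPU memory after
# the first operation and read back in for the second.
def scale_then_relu_unfused(xs, ws):
    scaled = [x * w for x, w in zip(xs, ws)]   # pass 1: writes intermediate
    return [max(s, 0.0) for s in scaled]       # pass 2: re-reads intermediate

# Fused: one pass. Each element is loaded once, both operations are
# applied, and only the final result is stored -- no intermediate
# round-trip through memory.
def scale_then_relu_fused(xs, ws):
    return [max(x * w, 0.0) for x, w in zip(xs, ws)]
```

Both versions produce identical results; the fused one simply never materializes the intermediate, which is exactly the memory traffic the technique is meant to eliminate.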

MatX says that another contributor to its chips' performance is their no-frills architecture. The processors lack some of the components included in standard GPUs, which frees up space for more AI-optimized circuits.

Earlier this year, MatX told Bloomberg that its chips will be at least ten times better at running large language models than Nvidia silicon. The company further claims that AI clusters powered by its silicon will be capable of running LLMs with ten trillion parameters. For smaller models with around 70 billion parameters, MatX is promising training times of a few days or weeks.

The company expects to complete the development of its first product next year.
