Mon. Aug 8th, 2022

Machine learning can give companies a competitive advantage by using the data they collect, e.g., purchasing patterns, to generate predictions that drive revenue-generating products (e.g., e-commerce recommendations). But it’s hard for any employee to keep up with, let alone manage, the vast amounts of data being created. That poses a problem, as AI systems tend to make superior predictions when given up-to-date data; systems that are not regularly retrained on fresh data risk going stale and losing accuracy over time.

Fortunately, an emerging set of practices called “MLOps” promises to simplify the process of feeding data into machine learning systems. One of its proponents is Mike Del Balso, the CEO of Tecton. Del Balso was inspired to co-found Tecton while at Uber, where he watched the company struggle to build and deploy new machine learning models.

“Models equipped with fresh, real-time features can provide much more accurate predictions. But building the data pipelines to generate these features is difficult, requires a lot of data engineering manpower, and can add weeks or months to project timelines,” Del Balso told BestFitnessBands in an email interview.

Del Balso, who previously led the machine learning teams for search ads at Google, launched Tecton in 2019 along with Jeremy Hermann and Kevin Stumpf, two former Uber colleagues. At Uber, the trio had created Michelangelo, an AI platform that Uber used internally to generate market forecasts, calculate ETAs, and automate fraud detection, among other use cases.

Michelangelo’s success inspired Del Balso, Hermann and Stumpf to create a commercial version of the technology, which became Tecton. Investors have taken notice: Tecton announced today that it has raised $100 million in a Series C round, bringing the company’s total raised to $160 million. The round was led by Kleiner Perkins, with participation from Databricks, Snowflake, Andreessen Horowitz, Sequoia Capital, Bain Capital Ventures and Tiger Global. Del Balso says the money will be used to scale up Tecton’s engineering and go-to-market teams.

“We expect the software we use today to be very personal and intelligent,” Kleiner Perkins partner Bucky Moore said in a statement to BestFitnessBands. “While machine learning makes this possible, it remains far from a reality, as the supporting infrastructure is prohibitively difficult to build for all but the most advanced companies. Tecton is making this infrastructure accessible to any team, enabling them to build machine learning apps faster.”


Tecton’s monitoring dashboard. Image Credits: Tecton

At a high level, Tecton automates the process of building features from real-time data sources. “Features” in machine learning are the individual independent variables that act as inputs to an AI system; systems use features to make their predictions.
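The idea of a feature can be sketched in a few lines of Python. This is an illustrative toy, not Tecton’s API: the feature names, the extraction function and the linear scoring model are all hypothetical.

```python
# Illustrative sketch: features are the independent variables a model consumes.
# All names and values here are hypothetical, not Tecton's API.

def extract_features(user_event):
    """Turn a raw event into a fixed-order feature vector."""
    return [
        user_event["purchase_count_7d"],               # recent purchase frequency
        user_event["avg_order_value"],                 # spending level
        1.0 if user_event["is_new_device"] else 0.0,   # device-novelty flag
    ]

# A toy linear model scores the feature vector.
WEIGHTS = [0.4, 0.01, 2.5]

def score(features):
    return sum(w * x for w, x in zip(WEIGHTS, features))

event = {"purchase_count_7d": 3, "avg_order_value": 42.0, "is_new_device": True}
print(score(extract_features(event)))  # 0.4*3 + 0.01*42 + 2.5 = 4.12
```

The model only ever sees the feature vector, which is why keeping the pipelines that compute those values fresh and consistent matters so much.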

“[Automation] enables companies to deploy real-time machine learning models much faster with less data engineering effort,” said Del Balso. “It also enables companies to make more accurate predictions. This in turn translates directly into business results, for example by increasing the fraud detection rate or by serving better product recommendations.”

In addition to orchestrating data pipelines, Tecton can store feature values for both the training and serving environments of an AI system. The platform can also monitor data pipelines, track latency and processing costs, and retrieve historical feature values to train systems headed for production.
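The core idea of a feature store, serving the same feature values to both training and inference, can be sketched with a minimal in-memory class. This is an assumption-laden illustration of the concept, not Tecton’s implementation.

```python
# Minimal in-memory feature store sketch (conceptual, not Tecton's API).
# The same store backs both offline training reads and online serving reads,
# which is what keeps training and production features consistent.

class FeatureStore:
    def __init__(self):
        self._values = {}  # (entity_id, feature_name) -> value

    def put(self, entity_id, feature_name, value):
        self._values[(entity_id, feature_name)] = value

    def get_vector(self, entity_id, feature_names):
        """Return feature values in a fixed order; None for missing features."""
        return [self._values.get((entity_id, f)) for f in feature_names]

store = FeatureStore()
store.put("user_42", "purchase_count_7d", 3)
store.put("user_42", "avg_order_value", 42.0)

# Training jobs and live inference both read through the same interface.
print(store.get_vector("user_42", ["purchase_count_7d", "avg_order_value"]))
# [3, 42.0]
```

A production system adds what this toy omits: time-travel lookups for historical training data, low-latency online serving, and monitoring, which is the gap platforms like Tecton aim to fill.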

Tecton also maintains Feast, an open source feature store that requires no dedicated infrastructure; instead, Feast reuses existing cloud or on-premises resources, spinning up new ones as needed.

“Typical use cases for Tecton are machine learning applications that take advantage of real-time inference. Some examples include fraud detection, recommendation systems, search, adoption, personalization and real-time pricing,” said Del Balso. “Many of these machine learning models perform much better at making predictions in real time, using real-time data. For example, fraud detection models are significantly more accurate when they use data about a user’s behavior from just seconds before, such as the number, size and geographic location of transactions.”
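The fraud-detection example Del Balso describes, aggregating a user’s transactions from the last few seconds into features, can be sketched as a windowed aggregation. The field names and the 60-second window below are hypothetical choices for illustration.

```python
# Sketch of real-time fraud features: aggregate a user's most recent
# transactions (count, total size, distinct locations) over a short window.
# Field names and the window length are illustrative assumptions.
from datetime import datetime, timedelta

def realtime_fraud_features(transactions, now, window=timedelta(seconds=60)):
    """Compute window-based features from a user's transaction history."""
    recent = [t for t in transactions if now - t["ts"] <= window]
    return {
        "txn_count_60s": len(recent),
        "txn_total_60s": sum(t["amount"] for t in recent),
        "distinct_countries_60s": len({t["country"] for t in recent}),
    }

now = datetime(2022, 8, 8, 12, 0, 0)
txns = [
    {"ts": now - timedelta(seconds=10), "amount": 900.0, "country": "US"},
    {"ts": now - timedelta(seconds=45), "amount": 850.0, "country": "BR"},
    {"ts": now - timedelta(minutes=10), "amount": 20.0, "country": "US"},  # outside window
]
print(realtime_fraud_features(txns, now))
# {'txn_count_60s': 2, 'txn_total_60s': 1750.0, 'distinct_countries_60s': 2}
```

Two large transactions from different countries within a minute is exactly the kind of signal a model can only use if the pipeline computes it with seconds-old data, which is the point Del Balso is making.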

According to Cognilytica, the global market for MLOps platforms will be worth $4 billion by 2025, up from $350 million in 2019. Tecton isn’t the only startup vying for it. Rivals include Comet, Weights & Biases, Iterative, InfuseAI, Arrikto and Continual, to name a few. On the feature store front, Tecton competes with Rasgo and Molecula, as well as more established players such as Splice, Google and AWS.

Del Balso cites a few factors in Tecton’s favor, such as strategic partnerships and integrations with Databricks, Snowflake and Redis. Tecton has hundreds of active users, though he declined to discuss customer numbers beyond saying the base has grown fivefold in the past year, and said gross margins (net sales minus cost of goods sold) are above 80%. Annual recurring revenue has apparently tripled from 2021 to 2022, but Del Balso declined to provide hard figures.

“We are still at the beginning of MLOps. This is a difficult transition for companies. Their teams of data scientists need to act more like data engineers and start building production-grade code. They need a slew of new tools to support this transition, and they need to integrate these tools into coherent machine learning platforms. The ecosystem of MLOps tools is still highly fragmented, making it more difficult for companies to build these machine learning platforms,” said Del Balso. “The pandemic accelerated the transition to digital experiences, and with it the importance of deploying operational ML to enable these experiences. We believe the pandemic was an accelerator for the adoption of new MLOps tools, including feature stores and feature platforms.”

San Francisco-based Tecton currently has 80 employees. The company plans to hire about 20 people over the next six months.
