Seven Takeaways from the Industry’s Largest Machine Learning Observability Event

Arize:Observe, an annual summit focused on machine learning (ML) observability, wrapped up last week in front of an audience of over 1,000 technical leaders and practitioners. Now available to stream on demand, the event features multiple tracks and talks from Etsy, Kaggle, Opendoor, Spotify, Uber and more. Here are some highlights and quotes from some of the best sessions.

Scaling a machine learning platform is about the customer

In the “Scaling Your ML Practice” panel, Coinbase Chief Engineering Officer Chintan Turakhia put it bluntly: “a platform for one is not a platform.” His advice to teams looking to build from scratch: “Don’t talk about the ML platform first; talk about the problems you solve for your customers and how you can improve the business with ML… Doing ML for ML is great and there are whole worlds like Kaggle that are built for it,” but solving a central customer problem “is all about doing the job internally,” he argues.

Machine learning infrastructure is more complex than software infrastructure

In the “ML Platform Power Builders: Assemble!” panel, Smitha Shyam, director of engineering at Uber, draws an important distinction between machine learning infrastructure and software and data infrastructure.

“There is a misconception that ML is only about algorithms,” she says. “Machine learning is really about data, systems and the model. Thus, the infrastructure needed to support machine learning, from initial development to deployment and ongoing maintenance, is very large and complex. There are also dependencies on the underlying data layers of the system in which these models operate. For example, seemingly innocent changes in the data layer can completely change the output of the model. Unlike software engineering, the work of ML isn’t done once you’ve tested your model and put it into production: the model’s predictions change as data changes, market conditions change, seasonality kicks in, and your surrounding systems and business assumptions change. You need to consider all of these things when building the whole ML platform infrastructure.”

As a result, ML infrastructure is “a superset of what’s going on in your software infrastructure, your data infrastructure, and then the stuff that’s unique to the modeling itself,” she continues.
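Her point about “seemingly innocent changes in the data layer” is exactly the kind of failure a basic drift check can surface. As an illustration only (the talk doesn’t prescribe a specific method), here is a minimal Population Stability Index (PSI) sketch in plain NumPy that compares a training-time feature distribution against production traffic:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a
    production sample of one numeric feature. Larger values mean more
    drift; a common rule of thumb flags PSI > 0.2 for investigation."""
    # Bin edges come from the baseline distribution
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    # Clip production values into the baseline range so nothing falls outside
    a_counts, _ = np.histogram(np.clip(actual, edges[0], edges[-1]), bins=edges)
    e_pct = np.clip(e_counts / len(expected), 1e-6, None)  # avoid log(0)
    a_pct = np.clip(a_counts / len(actual), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# A small upstream shift in the feature shows up as a nonzero PSI
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)    # feature values at training time
production = rng.normal(0.3, 1.0, 10_000)  # same feature, shifted in production
print(f"PSI: {psi(baseline, production):.3f}")
```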

Diversity is table stakes

Shawn Ramirez, PhD, head of data science at Shelf Engine — where women hold 50% of all leadership positions — isn’t shy about pointing out the myriad benefits of diversity at her company. “I think the commitment to diversity and inclusion at Shelf Engine is important in many ways,” she says. “First, it affects the accuracy and bias of our data science models. Second, it changes our product development. And finally, it has an impact on the quality and retention of our team.”

Tulsee Doshi, Head of Product – Responsible AI and Human-Centered Technology at Google, adds that it’s important not to overlook the global dimensions of diversity. “A lot of what we talk about in the press today is very Western-centric – we talk about failure modes related to communities in the United States – but I think a lot of these concerns about fairness, systemic racism and bias actually differ quite significantly when you go to different regions,” she says.

AI ethics goes far beyond compliance or explainability

According to a wide range of speakers, having an AI ethics strategy in place is also essential for businesses. “Responsible AI is not an add-on to your data science practice, it’s not a luxury item to add to your operations, it’s something that needs to be there from day one,” notes Bahar Sateli, Senior Manager of AI and Analytics at PwC.

For Reid Blackman, founder and CEO of Virtue Consultants, it’s also something that starts at the top. “One of the reasons we don’t see as much AI ethics in practice as we should is the lack of senior leadership,” he says. Ultimately, AI ethics must be “woven through how you think about financial incentives for employees, how you think about roles and responsibilities,” he adds.

For many, new approaches to managing the ethical risks associated with AI are needed. “We can’t avoid the fact that models will make mistakes and we need to have the right guardrails and accountability for that,” notes Google’s Tulsee Doshi. “But we can also do a lot to anticipate possible errors if we are careful in the metrics that we develop and are really intentional about making sure that we cut those metrics in different ways, that we develop a diversity of metrics to measure different types of errors.” She cautions against overreliance on explainability or transparency in this process: “I don’t think either of these is in and of itself a solution to the ethical issues of AI, in the same way a single metric is not a solution… these things work in concert.”
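To make “cut those metrics in different ways” concrete, here is a small, hedged sketch (the data, the “region” attribute, and the metric choices are all invented for illustration, not drawn from the talk) that computes the same evaluation metrics per data slice, since an aggregate number can hide a slice that underperforms:

```python
import numpy as np
import pandas as pd

# Hypothetical evaluation frame: predictions, labels, and an attribute
# to slice on ("region" is a stand-in for any cohort you care about).
df = pd.DataFrame({
    "label":  [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
    "pred":   [1, 0, 0, 1, 0, 1, 1, 0, 1, 1],
    "region": ["NA", "NA", "APAC", "APAC", "EMEA",
               "EMEA", "NA", "APAC", "EMEA", "NA"],
})

def slice_metrics(frame):
    # A deliberately diverse set of metrics, computed per slice
    tp = int(((frame.pred == 1) & (frame.label == 1)).sum())
    fp = int(((frame.pred == 1) & (frame.label == 0)).sum())
    fn = int(((frame.pred == 0) & (frame.label == 1)).sum())
    return pd.Series({
        "n": len(frame),
        "accuracy": (frame.pred == frame.label).mean(),
        "precision": tp / (tp + fp) if tp + fp else np.nan,
        "recall": tp / (tp + fn) if tp + fn else np.nan,
    })

# The overall numbers may look fine; the per-region view may not
print(slice_metrics(df))
print(df.groupby("region")[["label", "pred"]].apply(slice_metrics))
```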

The Data-Centric AI Revolution Drives the Need for End-to-End Observability

In the “Bracing Yourself For a World of Data-Centric AI” panel, Diego Oppenheimer, Executive Vice President of DataRobot, notes that the worlds of citizen data scientists and data science teams have some commonalities. “Operations are changing, but the part that’s consistent – and it’s interesting – is that the use cases are growing and you have more people involved in developing machine learning models and applying ML to use cases. Rigor around security, scale, governance and understanding what’s going on, and auditability and observability across the stack become even more important as you have proliferation — which… is only a bad thing if you don’t know what’s going on,” he notes.

Michael Del Balso, CEO and co-founder of Tecton, also notes the importance of understanding throughout the ML lifecycle. “Teams that build really high-quality ML apps” do well throughout the ML flywheel, he explains. “It’s not just the training phase, not just the decision phase – they’re also thinking about, for example, how does my data come back from my application into a training dataset? They play in all parts of this cycle and… make it much faster.”

The machine learning infrastructure space is maturing

Many speakers marvel at how far the industry has come in such a short time. As Josh Baer, Product Manager of the Machine Learning Platform at Spotify, points out: “When we started, there weren’t a lot of solutions available that suited our needs, so we had to build many of the basic components ourselves.”

Anthony Goldbloom, CEO and Founder of Kaggle, agrees: “Some of the tools – including Arize – are really starting to mature in helping to deploy models and have confidence that they’re doing what they should be doing.”

🔮 The Future: Multimodal Machine Learning

In the “Embedding Usage and Visualization In Modern ML Systems” panel, Leland McInnes, creator of UMAP and researcher at the Tutte Institute for Mathematics and Computing, explains what excites him about the future. On a more theoretical level, McInnes notes that “there is a lot of work on sheaves and cellular sheaves, which is a very abstruse mathematical thing but turns out to be surprisingly useful,” with “lots of relationships to graph neural networks” beginning to appear in the literature.

On UMAP in particular, McInnes argues that the “grossly underutilized” parametric UMAP deserves more attention. He is also “very interested in how to align different UMAP models. There is an aligned UMAP that can align data where you can explicitly define a relationship from one dataset to another, but what if I just start with two arbitrary datasets – say, vectors of French words and English word vectors and no dictionary? How do I produce a UMAP embedding that aligns these so that I can embed both? There are ways to do that,” he says, with “Gromov-Wasserstein distance” as a key search term for those wanting to learn more. “People are going to align all these different multimodal datasets through these kinds of techniques,” he says.
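For the case where you can “explicitly define a relationship,” the umap-learn library ships an AlignedUMAP class. Here is a minimal sketch; the toy English/French vectors and the identity dictionary are invented for illustration, and the dictionary-free scenario McInnes describes would instead rely on Gromov-Wasserstein-style matching:

```python
# pip install umap-learn
import numpy as np
import umap

rng = np.random.default_rng(42)

# Two hypothetical word-vector sets standing in for English and French.
# The second is a rotated copy of the first, to simulate shared structure.
english = rng.normal(size=(300, 50))
rotation, _ = np.linalg.qr(rng.normal(size=(50, 50)))
french = english @ rotation

# AlignedUMAP takes explicit row-to-row relations between consecutive
# datasets; this dict is our stand-in "dictionary" for the first 100 words.
relations = [{i: i for i in range(100)}]

mapper = umap.AlignedUMAP(n_neighbors=15, min_dist=0.1).fit(
    [english, french], relations=relations
)

# One 2D embedding per input dataset, aligned into a shared space
english_2d, french_2d = mapper.embeddings_
print(english_2d.shape, french_2d.shape)  # (300, 2) (300, 2)
```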

Kaggle’s Goldbloom is also excited about this space. “Some of the possibilities around multimodal ML are an area of excitement,” he says, particularly “multimodal training. Let’s say you’re trying to do speech recognition where you can hear what’s being said — what if you could also include a camera reading lips at the same time?”

Conclusion

With global business investment in AI systems expected to exceed $200 billion by 2023, this is an exciting time for the future of the industry. It’s also an important time for ML teams to learn best practices from their peers and make fundamental investments in ML platforms – including machine learning observability with ML performance tracking – to navigate a world where model issues directly impact business results.

