Neptune.ai Testimonials

  • "We use Neptune for most of our tracking tasks, from experiment tracking to uploading artifacts. A very useful part of tracking is monitoring the metrics; now we can easily see and compare those F-scores and other metrics."

  • "I really appreciate that I’ve never seen any outage in Neptune. And since we’re training an LLM, it’s super critical to not have any outages in our loss curve. Other than that, there are things you often take for granted in a product: reliability, flexibility, quality of support. Neptune nails those and gives us the confidence."

  • "We have a mantra: always be learning. We apply this primarily to our model, which means we’re always running experiments. So me, our CEO, other people in the team—we’re constantly checking the monitoring tool. It has to be nice, smooth, and be able to handle our training data streams consistently."

  • "With Neptune, I have a mature observability layer that gives me access to all the information. I can check any model’s performance very quickly; it takes me around a minute to find what I need. I don’t have to dig deeper and waste a lot of time—I have the results right in front of me. The time we have gained back played a significant part."

  • "No more DevOps needed for logging. No more starting VMs just to look at some old logs. No more moving data around to compare TensorBoards."

  • "I used Weights & Biases before Neptune. It’s impressive at the beginning, it works out of the box, and the UI is quite nice. But during the four years I used it, it didn’t improve—they didn’t fully develop the features they were working on. So I appreciate that Neptune has noticeably improved during the whole time I’ve been using it."

  • "My productivity in collaborating with students and also my own research speed increased dramatically. I wouldn’t know how to do my work without Neptune."

  • "We’ve got a few teams across different countries and different time zones. Prior to Neptune, we were just shipping each other zips of TensorBoard logs, so being able to see it all in one space, all logged to a central area, is really great and has helped us compare our results a lot faster and a lot more efficiently."

  • "This thing is so much better than TensorBoard, love you guys for creating it."

  • "Speed, accuracy, and reliability are of the essence. That’s what we like about Neptune. Its lightweight SDK seamlessly integrates with our machine learning workflows, enabling us to effortlessly track artifacts and monitor model performance metrics, empowering our team to iterate rapidly and ensuring repeatable and reliable results."

  • "The killer feature in Neptune is custom dashboards. Without them, I wouldn’t be able to communicate my simulations to developers, the analytics team, and business stakeholders without a lot of hassle. Neptune gives our data scientists the peace of mind that their best results won’t be lost and that communication will be a breeze."

  • “Neptune makes it easy to share results with my teammates. I send them a link and tell them what to look at, or I build a view on the experiments dashboard. I don’t need to generate it by myself, and everyone on my team has access to it.”

  • "Neptune’s UI is highly configurable, which is way better than MLflow."

  • “Neptune provides an accessible and intuitive way to visualize, analyze, and share metrics of our projects. We can discuss them not only with other team members, but also with management, in a way that can be easily interpreted by someone not familiar with the implementation details. Tracking and comparing different approaches has notably boosted our productivity, allowing us to focus more on the experiments, develop good new practices within our team, and make better data-driven decisions. We love the fact that the integration is effortless. No matter what framework we use, it just works in a matter of minutes, allowing us to automate and unify our processes.”

  • “I used to keep track of my models with folders on my machine and use naming conventions to save the parameters and model architecture. Whenever I wanted to track something new about a model, I would have to update the naming structure. It was painful, and there was a lot of manual work involved. Now everything happens automatically. I can compare models in an online interface that looks great. It saves me a lot of time, and I can focus on my research instead of keeping track of everything manually.”