62 Neptune.ai Testimonials

  • "We have all the metrics in our shared file storage as a backup, but we don’t really have a nice way to access them, to sort them, etc. We don’t have a setup for it because Neptune has been stable enough for us not to need it."

  • "We primarily use Neptune for training monitoring, particularly for loss tracking, which is crucial to decide whether to stop training if it’s not converging properly. It’s also invaluable for comparing experiments and presenting key insights through an intuitive dashboard to our managers and product owners."

  • "When I joined this company, we were doing quite many different experiments and it’s really hard to keep track of them all so I needed something to just view the result or sometimes or also it’s intermediate results of some experiments like what [does] the data frame look like? What [does] the CSV look like? Is it reasonable? Is there something that went wrong between the process that resulted in an undesirable result? So we were doing it manually first but just writing some log value to some log server like a Splunk."

  • "We use Neptune for keeping track of all our research work and monitoring of on-going model training. Since everything is tracked in Neptune it is super easy to keep track of what we did, how we did it, and what the results were. It makes it a lot easier also direct future research directions."

  • “I’m working with deep learning (music information processing). Previously I was using TensorBoard to track losses and metrics in TensorFlow, but now I switched to PyTorch, so I was looking for alternatives and found Neptune a bit easier to use. I like the fact that I don’t need to (re)start my own server all the time, and the logging of GPU memory etc. is nice. So far I didn’t have the need to share the results with anyone, but I may in the future, so that will be nice as well.”

  • “Such a fast setup! Love it!”

  • "With Neptune, I have a mature observability layer to access and gain all the information. I can check any model’s performance very quickly. It would take me around a minute to figure out this information. I don’t have to go deeper and waste a lot of time. I have the results right in front of me. The time we have gained back played a significant part."

  • “I tested multiple loggers with pytorch-lightning integrations and found Neptune to be the best fit for my needs. Friendly UI, ease of use, and great documentation.”

  • "One of the biggest challenges [we had] was managing the pipelines and the process itself because we had 40 to 50 different pipelines. Depending on the exact use case or what kind of data we’d like to output, we could have different combinations for running them to get different outputs. So basically, the entire system isn’t so simple."

  • "I like the dashboards because we need several metrics, so you code the dashboard once, have those styles, and easily see them on one screen. Then, any other person can view the same thing, so that’s pretty nice."

  • “What we like about Neptune is that it easily hooks into multiple frameworks. Keeping track of machine learning experiments systematically over time and visualizing the output adds a lot of value for us.”

  • "No more DevOps needed for logging. No more starting VMs just to look at some old logs. No more moving data around to compare TensorBoards."

  • "I used Weights & Biases before Neptune. It’s impressive at the beginning, it works out of the box, and the UI is quite nice. But during the four years I used it, it didn’t improve —they didn’t fully develop the features they were working on. So I appreciate that Neptune has been noticably improved during the whole time I’ve been using it."

  • "Speed, accuracy and reliability are of the essence. That’s what we like about Neptune. Its lightweight SDK seamlessly integrates with our machine learning workflows, enabling us to effortlessly track artifacts and monitor model performance metrics and empowering our team to iterate rapidly, ensuring repeatable and reliable results."

  • “Indeed, it was a game-changer for me. As you know, AI training workloads are lengthy in nature and sometimes prone to hanging in the Colab environment, and just being able to launch a set of tests trying different hyperparameters, with the assurance that the experiment will be correctly recorded in terms of results and hyperparameters, was big for me.”
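
Several of the quotes above describe the same two usage patterns: logging training metrics and artifacts through the Neptune SDK, and plugging Neptune into PyTorch Lightning as the Trainer's logger. The sketch below is a minimal illustration of both, assuming the neptune Python client (1.x) and the Lightning integration are installed; the project name, metric names, and loop values are placeholders, not taken from any of the teams quoted.

    import neptune
    from lightning.pytorch import Trainer
    from lightning.pytorch.loggers import NeptuneLogger

    # --- Plain SDK usage: create a run, log hyperparameters and a loss curve ---
    # Credentials are read from the NEPTUNE_API_TOKEN environment variable;
    # "my-workspace/my-project" is a placeholder project name.
    run = neptune.init_run(project="my-workspace/my-project")
    run["parameters"] = {"lr": 1e-3, "batch_size": 32}
    for step, loss in enumerate([0.9, 0.7, 0.5]):   # stand-in for a real training loop
        run["train/loss"].append(loss, step=step)
    # run["model/checkpoint"].upload("model.pt")    # upload() expects an existing file
    run.stop()

    # --- PyTorch Lightning usage: pass Neptune in as the Trainer's logger ---
    neptune_logger = NeptuneLogger(project="my-workspace/my-project")
    trainer = Trainer(logger=neptune_logger, max_epochs=10)
    # trainer.fit(model, datamodule=dm)             # model and datamodule defined elsewhere

With either pattern, metrics logged under keys like "train/loss" show up as charts in the Neptune UI, which is the dashboard-and-comparison workflow several of the testimonials refer to.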