Neptune.ai Testimonials

  • "No more DevOps needed for logging. No more starting VMs just to look at some old logs. No more moving data around to compare TensorBoards."

  • "Clearly, handling the training of more than 7000 separate machine learning models without any specialized tool is practically impossible. We definitely needed a framework able to group and manage the experiments."

  • "Weights and Biases went from being reasonably priced to being way too much. Especially since more than half the people we wanted to be able to see our models weren’t doing modeling. When we looked for an alternative, Neptune was the only one that could offer us everything we needed."

  • “Neptune was easy to set up and integrate into my experimental flow. The tracking and logging options are exactly what I needed and the documentation was up to date and well written.”

  • “The problem with training models on remote clusters is that every time you want to see what is going on, you need to get your FTP client up, download the logs to a machine with a graphical interface, and plot it. I tried using TensorBoard but it was painful to set up in my situation. With Neptune, seeing training progress was as simple as hitting refresh. The feedback loop between changing the code and seeing whether anything changed is just so much shorter. Much more fun and I get to focus on what I want to do. I really wish that it existed 10 years ago when I was doing my PhD.”

  • "Neptune made sense to us due to its pay-per-use or usage-based pricing. Now when we are doing active experiments then we can scale up and when we’re busy integrating all our models for a few months that we scale down again."

  • "We use Neptune for most of our tracking tasks, from experiment tracking to uploading the artifacts. A very useful part of tracking was monitoring the metrics, now we could easily see and compare those F-scores and other metrics."

  • "We primarily use Neptune for training monitoring, particularly for loss tracking, which is crucial to decide whether to stop training if it’s not converging properly. It’s also invaluable for comparing experiments and presenting key insights through an intuitive dashboard to our managers and product owners."

  • “I had been thinking about systems to track model metadata and it occurred to me I should look for existing solutions before building anything myself. Neptune is definitely satisfying the need to standardize and simplify tracking of experimentation and associated metadata. My favorite feature so far is probably the live tracking of performance metrics, which is helpful to understand and troubleshoot model learning. I also find the web interface to be lightweight, flexible, and intuitive.”

  • "Neptune and Optuna go hand in hand. You should start using Neptune as early as possible to save the trouble of having to go through multiple log statements to make sense of how your model did."

  • "We are very integrated with AWS and want everything to happen inside of AWS, and when you are training on a large scale, you want multiple training jobs to happen at once, and that is where Neptune comes in."

  • "So I would say the main argument for using Neptune is that you can be sure that nothing gets lost, everything is transparent, and I can always go back in history and compare."

  • "Neptune works flawlessly, and integrating it with PyTorch Lightning was very smooth."

  • “The last few hours have been my first w/ Neptune and I’m really appreciative of how much time it’s saved me not having to fiddle w/ matplotlib in addition to everything else.”

  • “Neptune is making it easy to share results with my teammates. I’m sending them a link and telling them what to look at, or I’m building a View on the experiments dashboard. I don’t need to generate it by myself, and everyone in my team has access to it.”
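
Several of the quotes above describe the same basic workflow: start a run, log hyperparameters and metrics from the training loop, and watch them update live in the web UI. A minimal sketch of that workflow, assuming the current neptune Python client (the `neptune` package) and a hypothetical project name `my-workspace/my-project`:

    import neptune

    # Start a tracked run; credentials can also come from the NEPTUNE_API_TOKEN
    # and NEPTUNE_PROJECT environment variables instead of being passed here.
    run = neptune.init_run(project="my-workspace/my-project")  # hypothetical project name

    # Log hyperparameters once as a nested dictionary.
    run["parameters"] = {"lr": 1e-3, "batch_size": 32}

    # Append metric values as training progresses; they show up live in the web UI,
    # which is the "hit refresh" feedback loop described above.
    for epoch in range(10):
        loss = 1.0 / (epoch + 1)  # placeholder for a real training loss
        run["train/loss"].append(loss)

    run.stop()

The PyTorch Lightning and Optuna integrations mentioned in the quotes wrap the same kind of run object (via `NeptuneLogger` and the Optuna `NeptuneCallback`, respectively), so metrics are captured without adding logging calls to the training loop itself.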