"Clearly, handling the training of more than 7000 separate machine learning models without any specialized tool is practically impossible. We definitely needed a framework able to group and manage the experiments."
"Our ML teams at Waabi continuously run large-scale experiments with ML models. A significant challenge we faced was keeping track of the data they collected from experiments and exporting it in an organized and shareable way."
"We tried MLflow. But the problem is that they have no user management features, which messes up a lot of things."
"No more DevOps needed for logging. No more starting VMs just to look at some old logs. No more moving data around to compare TensorBoards."
"I used Weights & Biases before Neptune. It’s impressive at the beginning, it works out of the box, and the UI is quite nice. But during the four years I used it, it didn’t improve —they didn’t fully develop the features they were working on. So I appreciate that Neptune has been noticably improved during the whole time I’ve been using it."
“What we like about Neptune is that it easily hooks into multiple frameworks. Keeping track of machine learning experiments systematically over time and visualizing the output adds a lot of value for us.”
“Neptune is making it easy to share results with my teammates. I’m sending them a link and telling them what to look at, or I’m building a View on the experiments dashboard. I don’t need to generate it by myself, and everyone in my team has access to it.”
"We initially aimed for a GKE deployment for our experiment tracking tool. However, the other solution we explored had a rigid installation process and limited support, making it unsuitable for our needs. Thankfully, Neptune’s on-premise installation offered the flexibility and adjustability we required. The process was well-prepared, and their engineers were incredibly helpful, answering all our questions and even guiding us through a simpler deployment approach. Neptune’s on-prem solution and supportive team saved the day, making it a win for us."
"We use PyTorch Lightning, and it was just a matter of changing the tracker from Weights and Biases to Neptune. It’s like two lines of code. It’s actually quite easy."
"Self-hosted deployment for ML solutions will become more and more important. People don't feel comfortable with valuable intellectual property being stored in 3rd party DBs. For us, such deployment was too difficult and time-consuming in the previous solution. We could achieve that with Neptune, and it allowed us to close important deals that had stringent security requirements."
"As our company has grown from a startup to a sizeable organization of 200 people, robust security and effective user management have become increasingly evident and vital."
“Indeed it was a game-changer for me. As you know, AI training workloads are lengthy in nature, sometimes also prone to hanging in the Colab environment, and just being able to launch a set of tests trying different hyperparameters with the assurance that the experiment will be correctly recorded in terms of results and hyperparameters was big for me.”
"We evaluated several commercial and open-source solutions. We looked at the features for tracking experiments, the ability to share, the quality of the documentation, and the willingness to add new features. Neptune was the best choice for our use cases."
"In the first month, we discussed what our ideal environment for machine learning (ML) development would look like, and experiment tracking was a key part of it."
"We are running our training jobs through SageMaker Pipelines, and to make it reproducible, we need to log each parameter when we launch the training job with SageMaker Pipeline. A useful feature here is the `NEPTUNE_CUSTOM_RUN_ID` environment variable."