"MLflow requires what I like to call software kung fu, because you need to host it yourself. So you have to manage the entire infrastructure — sometimes it’s good, oftentimes it’s not."
"Neptune works flawlessly, and integrating it with PyTorch Lightning was very smooth."
"The killer feature in Neptune is custom dashboards. Without this, I wouldn’t be able to communicate my simulations to Developers, the Analytics team, and business stakeholders without any hassle. Neptune gives our Data Scientists the piece of mind that their best results won’t be lost and that communication will be a breeze."
"Within the first few dozen runs, I realized how complete the tracking was: not just one or two numbers, but also the exact state of the code, the best-quality model snapshot stored to the cloud, and the ability to quickly add notes on a particular experiment. My old methods were such a mess by comparison."
"Neptune makes it easy to share results with my teammates. I send them a link and tell them what to look at, or I build a View on the experiments dashboard. I don't need to generate it myself, and everyone on my team has access to it."
"Neptune made sense to us due to its pay-per-use or usage-based pricing. Now when we are doing active experiments then we can scale up and when we’re busy integrating all our models for a few months that we scale down again."
"We use Neptune for most of our tracking tasks, from experiment tracking to uploading the artifacts. A very useful part of tracking was monitoring the metrics, now we could easily see and compare those F-scores and other metrics."
"Our ML teams at Waabi continuously run large-scale experiments with ML models. A significant challenge we faced was keeping track of the data they collected from experiments and exporting it in an organized and shareable way."
"We evaluated several commercial and open-source solutions. We looked at the features for tracking experiments, the ability to share, the quality of the documentation, and the willingness to add new features. Neptune was the best choice for our use cases."
"In the first month, we discussed what our ideal environment for machine learning (ML) development would look like, and experiment tracking was a key part of it."
"We are running our training jobs through SageMaker Pipelines, and to make it reproducible, we need to log each parameter when we launch the training job with SageMaker Pipeline. A useful feature here is the `NEPTUNE_CUSTOM_RUN_ID` environment variable."
"An important detail that we considered when we decided to choose Neptune is that we can invite everybody on Neptune, even non-technical people like product managers — there is no limitation on the users. This is great because, on AWS, you’d need to get an additional AWS account, and for other experiment tracking tools, you may need to acquire a per-user license."
"We initially aimed for a GKE deployment for our experiment tracking tool. However, the other solution we explored had a rigid installation process and limited support, making it unsuitable for our needs. Thankfully, Neptune’s on-premise installation offered the flexibility and adjustability we required. The process was well-prepared, and their engineers were incredibly helpful, answering all our questions and even guiding us through a simpler deployment approach. Neptune’s on-prem solution and supportive team saved the day, making it a win for us."
"We use PyTorch Lightning, and it was just a matter of changing the tracker from Weights and Biases to Neptune. It’s like two lines of code. It’s actually quite easy."
"Self-hosted deployment for ML solutions will become more and more important. People don't feel comfortable with valuable intellectual property being stored in 3rd party DBs. For us, such deployment was too difficult and time-consuming in the previous solution. We could achieve that with Neptune, and it allowed us to close important deals that had stringent security requirements."