"We are very integrated with AWS and want everything to happen inside of AWS, and when you are training on a large scale, you want multiple training jobs to happen at once, and that is where Neptune comes in."
"We tried MLflow. But the problem is that they have no user management features, which messes up a lot of things."
"With Neptune, I have a mature observability layer to access and gain all the information. I can check any model’s performance very quickly. It would take me around a minute to figure out this information. I don’t have to go deeper and waste a lot of time. I have the results right in front of me. The time we have gained back played a significant part."
"Neptune’s UI is highly configurable, which is way better than MLflow."
“Without the information I have in the Monitoring section, I wouldn’t know that my experiments are running 10 times slower than they could. All of my experiments are trained on separate machines that I can only access via SSH. If I had to download and check all of this separately, I would be rather discouraged. When I want to share my results, I simply send a link.”
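For readers unfamiliar with the Monitoring section mentioned above, here is a minimal sketch using the Neptune Python client; the project name and logged values are placeholders, and the API token is assumed to come from the environment:

```python
import random

import neptune

# Minimal sketch, assuming the Neptune Python client; the project name is a
# placeholder and NEPTUNE_API_TOKEN is read from the environment.
run = neptune.init_run(
    project="my-workspace/my-project",
    capture_hardware_metrics=True,  # CPU, GPU, and memory usage land in the Monitoring section
)

for step in range(100):
    loss = 1.0 / (step + 1) + random.random() * 0.01  # stand-in for a real training loss
    run["train/loss"].append(loss)

print(run.get_url())  # every run is shareable as a link, no SSH needed
run.stop()
```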
"Speed, accuracy and reliability are of the essence. That’s what we like about Neptune. Its lightweight SDK seamlessly integrates with our machine learning workflows, enabling us to effortlessly track artifacts and monitor model performance metrics and empowering our team to iterate rapidly, ensuring repeatable and reliable results."
“I had been thinking about systems to track model metadata and it occurred to me I should look for existing solutions before building anything myself. Neptune is definitely satisfying the need to standardize and simplify tracking of experimentation and associated metadata. My favorite feature so far is probably the live tracking of performance metrics, which is helpful to understand and troubleshoot model learning. I also find the web interface to be lightweight, flexible, and intuitive.”
"Building something like a power line is a huge project, so you have to get the design right before you start. The more reasonable designs you see, the better decision you can make. Optioneer can get you design assets in minutes at a fraction of the cost of traditional design methods."
“I didn’t expect this level of support.”
“It was a game-changer for me. AI training workloads are lengthy in nature and sometimes prone to hanging in a Colab environment, so just being able to launch a set of tests with different hyperparameters, with the assurance that each experiment’s results and hyperparameters would be correctly recorded, was big for me.”
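A minimal sketch of the launch-and-record pattern described here, assuming the Neptune Python client; the grid, project name, and score are placeholders:

```python
import itertools

import neptune

grid = {"lr": [1e-2, 1e-3], "batch_size": [32, 64]}

# One run per hyperparameter combination: parameters and results are persisted
# server-side as they are logged, so a hung Colab session does not lose them.
for lr, batch_size in itertools.product(grid["lr"], grid["batch_size"]):
    run = neptune.init_run(project="my-workspace/my-project")
    run["parameters"] = {"lr": lr, "batch_size": batch_size}
    run["eval/score"] = 1.0 - lr * batch_size / 100  # stand-in for a real result
    run.stop()
```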
"Clearly, handling the training of more than 7000 separate machine learning models without any specialized tool is practically impossible. We definitely needed a framework able to group and manage the experiments."
"MLflow requires what I like to call software kung fu, because you need to host it yourself. So you have to manage the entire infrastructure — sometimes it’s good, oftentimes it’s not."
"Neptune made sense to us due to its pay-per-use or usage-based pricing. Now when we are doing active experiments then we can scale up and when we’re busy integrating all our models for a few months that we scale down again."
"We use Neptune for most of our tracking tasks, from experiment tracking to uploading the artifacts. A very useful part of tracking was monitoring the metrics, now we could easily see and compare those F-scores and other metrics."
"We have a mantra: always be learning. We apply this primarily to our model, which means we’re always running experiments. So me, our CEO, other people in the team—we’re constantly checking the monitoring tool. It has to be nice, smooth, and be able to handle our training data streams consistently."