- “I used to keep track of my models with folders on my machine and use naming conventions to save the parameters and model architecture. Whenever I wanted to track something new about the model, I would have to update the naming structure. It was painful. There was a lot of …
- “This thing is so much better than TensorBoard, love you guys for creating it.”
- "Our ML teams at Waabi continuously run large-scale experiments with ML models. A significant challenge we faced was keeping track of the data they collected from experiments and exporting it in an organized and shareable way."
- “Indeed, it was a game-changer for me. As you know, AI training workloads are lengthy in nature and sometimes prone to hanging in the Colab environment, and just to be able to launch a set of tests trying different hyperparameters with the assurance that the experiment will be correctly recorded in …
- "At some point, one of my students tried doing the tracking process manually, and he was very frustrated after one project. Any manual change can mess up how the information is organized and how you track it. And if you do not build it well, then you suffer; you need to recode, etc. …
- “I’m working with deep learning (music information processing). Previously, I was using TensorBoard to track losses and metrics in TensorFlow, but now I’ve switched to PyTorch, so I was looking for alternatives, and I found Neptune a bit easier to use. I like the fact that I don’t need to …
- “Such a fast setup! Love it.”
- "Weights and Biases went from being reasonably priced to being way too much, especially since more than half of the people we wanted to be able to see our models weren’t doing modeling. When we looked for an alternative, Neptune was the only one that could offer us everything we needed."
- "Neptune and Optuna go hand in hand. You should start using Neptune as early as possible to save the trouble of having to go through multiple log statements to make sense of how your model did."
- "We are very integrated with AWS and want everything to happen inside of AWS, and when you are training on a large scale, you want multiple training jobs to happen at once, and that is where Neptune comes in."
- "Clearly, handling the training of more than 7000 separate machine learning models without any specialized tool is practically impossible. We definitely needed a framework able to group and manage the experiments."
- "Speed, accuracy, and reliability are of the essence. That’s what we like about Neptune. Its lightweight SDK seamlessly integrates with our machine learning workflows, enabling us to effortlessly track artifacts and monitor model performance metrics, and empowering our team to iterate rapidly, ensuring repeatable and reliable results."
- "I’ve used Neptune since 2019, first for my personal projects and now within the company. During this time, I saw changes and improvements in the UI, but also in performance and reliability. But at the same time, I always appreciated that it never became too cluttered with too many things. It’s straight …
- “What we like about Neptune is that it easily hooks into multiple frameworks. Keeping track of machine learning experiments systematically over time and visualizing the output adds a lot of value for us.”
- "One of the biggest challenges [we had] was managing the pipelines and the process itself because we had 40 to 50 different pipelines. Depending on the exact use case or what kind of data we’d like to output, we could have different combinations for running them to get different outputs. …