"I like the dashboards because we need several metrics, so you code the dashboard once, have those styles, and easily see them on one screen. Then, any other person can view the same thing, so that’s pretty nice."
"We’ve got a few teams across different countries and different time zones and prior to Neptune, we were just shipping each other zips of like TensorBoard logs, so being able to see it all in space and it’s all just logged to the central area is really great and has helped us compare our results a lot faster and a lot more efficiently."
“This thing is so much better than TensorBoard, love you guys for creating it.”
“The problem with training models on remote clusters is that every time you want to see what is going on, you need to get your FTP client up, download the logs to a machine with a graphical interface, and plot it. I tried using TensorBoard but it was painful to set up in my situation. With Neptune, seeing training progress was as simple as hitting refresh. The feedback loop between changing the code and seeing whether anything changed is just so much shorter. Much more fun and I get to focus on what I want to do. I really wish that it existed 10 years ago when I was doing my PhD.”
“What we like about Neptune is that it easily hooks into multiple frameworks. Keeping track of machine learning experiments systematically over time and visualizing the output adds a lot of value for us.”
“Neptune allows us to keep all of our experiments organized in a single space. Being able to see my team’s work results any time I need makes it effortless to track progress and enables easier coordination.”
"I’ve used Neptune from 2019, first for my personal projects and now within the company. During this time, I saw changes and improvements in UI, but also performance and reliability. But at the same time, I always appreciated that it never became too cluttered with too many things. It’s straight to the point and it’s very effective in what it does."
"In the first month, we discussed what our ideal environment for machine learning (ML) development would look like, and experiment tracking was a key part of it."
"We are running our training jobs through SageMaker Pipelines, and to make it reproducible, we need to log each parameter when we launch the training job with SageMaker Pipeline. A useful feature here is the `NEPTUNE_CUSTOM_RUN_ID` environment variable."
"An important detail that we considered when we decided to choose Neptune is that we can invite everybody on Neptune, even non-technical people like product managers — there is no limitation on the users. This is great because, on AWS, you’d need to get an additional AWS account, and for other experiment tracking tools, you may need to acquire a per-user license."
"Weights and Biases went from being reasonably priced to being way too much. Especially since more than half the people we wanted to be able to see our models weren’t doing modeling. When we looked for an alternative, Neptune was the only one that could offer us everything we needed."
"When I joined this company, we were doing quite many different experiments and it’s really hard to keep track of them all so I needed something to just view the result or sometimes or also it’s intermediate results of some experiments like what [does] the data frame look like? What [does] the CSV look like? Is it reasonable? Is there something that went wrong between the process that resulted in an undesirable result? So we were doing it manually first but just writing some log value to some log server like a Splunk."
"Speed, accuracy and reliability are of the essence. That’s what we like about Neptune. Its lightweight SDK seamlessly integrates with our machine learning workflows, enabling us to effortlessly track artifacts and monitor model performance metrics and empowering our team to iterate rapidly, ensuring repeatable and reliable results."
“I am super messy with my experiments, but now I have everything organized for me automatically. I love it.”
"Neptune and Optuna go hand in hand. You should start using Neptune as early as possible to save the trouble of having to go through multiple log statements to make sense of how your model did."