Tech ONTAP Blogs

AI-Powered Storage Management Moves from Console to VS Code

SagarGupta
NetApp
Author: Cloud Storage Product Team - Nitya Gupta, Prabu Arjunan, Sagar Gupta, Chahat Gill

 

In Google Cloud environments, managing storage for ML workloads shouldn't pull engineers away from their notebooks and pipelines. Yet many data science teams find themselves bouncing between the Google Cloud Console, infrastructure repos, and Jupyter—constantly context-switching just to understand and optimize storage. 

 

That friction adds up. A routine volume review or capacity check can silently consume most of a day, while data scientists wait for storage to be ready. 

 

The Google Cloud NetApp Volumes AI-Powered VS Code Extension changes this dynamic. It brings intelligent storage insights and management directly into VS Code, letting ML DevOps engineers analyze, provision, and optimize storage without leaving their development workspace. 

 

Here's how it works for Priya, an ML DevOps engineer. 

 

Priya's Reality: Waiting on Storage, Losing Focus 

 

Priya's team runs dozens of ML experiments per month on Google Cloud. At the start of a sprint, she gets a request: 

 

"We're scaling training for three new models. Can you set up storage for data ingestion, training datasets, and checkpoints? Also audit our existing volumes for cost optimization." 

 

It sounds operational—but it's a full-day detour from pipeline work. Here's why: 

  • Resource Discovery: The day starts with manual navigation of the Google Cloud Console. Priya browses NetApp Volumes and storage pools, comparing configurations across multiple tabs and tracking data in spreadsheets—all while managing urgent requests from the data science team. 
  • Configuration & Compliance: Next, she audits performance tiers, capacity, and Vertex AI integrations for each volume. This involves cross-referencing against internal standards to catch over-provisioning or compliance gaps, a manual process prone to oversight. 
  • Optimization & Handover: Finally, she analyzes utilization for cost efficiencies and documents the setup. By the time she confirms availability to the team, an entire day has been consumed by administrative tasks rather than strategic pipeline optimization. 

The work was necessary, but it felt like a detour, not forward progress. 

 

What Changes with AI in VS Code 

 

Now imagine Priya's team adopts the Google Cloud NetApp Volumes AI-Powered VS Code Extension. 

 

Instead of starting with the Google Cloud Console, Priya opens VS Code, where she already keeps her ML infrastructure code, notebooks, and runbooks. 

 

Step 1 (5 min): Priya installs the extension and authenticates it with her Google Cloud credentials. A new NetApp Volumes view appears in the sidebar, showing a tree of storage pools, volumes, and snapshots across her Google Cloud projects—instantly visible without console navigation. 
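
The extension surfaces this inventory in its resource browser. For teams that also want the same listing in a script or runbook, it can be pulled with the Google Cloud NetApp Python client. The sketch below is illustrative only: it assumes the google-cloud-netapp package, Application Default Credentials, and placeholder project and region values.

    # Minimal inventory sketch with the google-cloud-netapp client
    # (pip install google-cloud-netapp). PROJECT_ID and LOCATION are
    # placeholders; authentication assumes Application Default Credentials.
    from google.cloud import netapp_v1

    PROJECT_ID = "my-ml-project"   # hypothetical project
    LOCATION = "us-central1"       # hypothetical region

    client = netapp_v1.NetAppClient()
    parent = f"projects/{PROJECT_ID}/locations/{LOCATION}"

    # Storage pools shown in the extension's resource tree
    for pool in client.list_storage_pools(parent=parent):
        print(f"pool={pool.name} service_level={pool.service_level.name} "
              f"capacity_gib={pool.capacity_gib}")

    # Volumes under the same project and location
    for volume in client.list_volumes(parent=parent):
        print(f"volume={volume.name} capacity_gib={volume.capacity_gib} "
              f"service_level={volume.service_level.name}")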

 

Step 2 (5 min): She asks the AI assistant: @gcnv analyze this volume. Within seconds, the extension returns: 

  • Performance optimization opportunities (is this volume over-provisioned?) 
  • Cost efficiency recommendations (which tiers could save money?) 
  • Data redundancy and disaster recovery insights 

Instant intelligence, with no manual analysis required. 
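
Recommendations like these can be cross-checked against the capacity and utilization data the NetApp Volumes API exposes. Purely as an illustration of the kind of signal involved, and not the extension's actual analysis, a quick over-provisioning check on a single volume could look like the sketch below; the volume path and the 40% threshold are assumptions.

    # Rough illustration of an over-provisioning check; the extension's AI
    # does its own analysis -- this only shows the kind of signal involved.
    # The volume path and the 40% threshold are hypothetical.
    from google.cloud import netapp_v1

    VOLUME_NAME = (
        "projects/my-ml-project/locations/us-central1/volumes/training-datasets"
    )  # hypothetical volume

    client = netapp_v1.NetAppClient()
    volume = client.get_volume(name=VOLUME_NAME)

    utilization = volume.used_gib / volume.capacity_gib if volume.capacity_gib else 0.0
    print(f"{volume.name}: {volume.used_gib} GiB used of {volume.capacity_gib} GiB "
          f"({utilization:.0%})")

    if utilization < 0.40:  # assumed threshold for flagging over-provisioning
        print("Candidate for right-sizing or a lower service level.")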

 

Step 3 (10 min): She clicks through the resource tree in VS Code to inspect detailed configuration for each volume: capacity, tier, performance characteristics. All in one place instead of scattered console tabs. 

 

Step 4 (5 min): For new storage needs, she can ask @gcnv what is this volume to understand existing patterns, or document the provisioning and optimization plan directly in infrastructure-as-code format within VS Code. 
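
When the plan calls for new volumes, such as the checkpoint storage in the sprint request, the provisioning itself can also be captured as code and reviewed like any other change. A minimal sketch using the same client library, with an assumed existing storage pool and placeholder names and sizes:

    # Minimal provisioning sketch with the google-cloud-netapp client.
    # The pool, volume ID, share name, and capacity are hypothetical;
    # adjust protocols and sizing to your own requirements.
    from google.cloud import netapp_v1

    client = netapp_v1.NetAppClient()
    parent = "projects/my-ml-project/locations/us-central1"

    volume = netapp_v1.Volume(
        share_name="model-checkpoints",                  # NFS export name
        storage_pool=f"{parent}/storagePools/ml-pool",   # existing pool (assumed)
        capacity_gib=2048,                               # 2 TiB, assumed size
        protocols=[netapp_v1.Protocols.NFSV3],
    )

    # create_volume returns a long-running operation; result() waits for completion
    operation = client.create_volume(
        parent=parent, volume=volume, volume_id="model-checkpoints"
    )
    print(f"Created: {operation.result().name}")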

The same task that previously consumed most of a day now fits into one focused 30-minute session. Data scientists get their storage faster, and Priya gets back to pipeline work. 

 

What This Brings to Developer Teams 

 

Aspect                        | Before                     | After
------------------------------|----------------------------|---------------------------------
Volume inspection & analysis  | ~8 hours across tabs       | ~30 minutes in VS Code
Context switches              | 10+ console tabs           | Single editor
Cost optimization             | Manual, time-consuming     | AI recommendations in real-time
Experiment delays             | Waiting hours for storage  | Setup ready in 30 minutes
Configuration accuracy        | Manual, error-prone        | Automated, version-controlled

 

For a team of 8 engineers supporting 50+ data scientists, this reclaims 24+ hours per month—time redirected toward pipeline optimization, cost management, and supporting more concurrent experiments. 

 

Bonus: Data scientists wait 4–6 fewer hours for storage, accelerating experiment cycles. For teams running 100+ experiments monthly, this compounds into weeks of faster time-to-model. 

 

Key Capabilities 

 

  • @gcnv analyze this volume: AI-powered cost and performance insights  
  • @gcnv what is this volume: Comprehensive volume details without console navigation  
  • Resource Browser: Tree view of pools, volumes, snapshots, and clones  
  • Performance Metrics: Capacity, utilization, and tier visibility  
  • Cost Optimization: AI recommendations for right-sizing and tier selection 

Learn More: 

  • Reach out: 1P_ProductGrowth@netapp.com 