Setup DR Test SVM (testing-nasprd)

Perform these steps before the network is broken, and after the domain controllers, DNS, and BeyondTrust are confirmed to be functioning.

Scott Linden - June 21, 2022

Update - June 16, 2023

Pause SVM Replication (DR NetApp)

Verify the SnapMirror quiesce (pause) completed.
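
For example (a sketch only; "nasprd-dr" and "vol_data" are placeholders for the actual DR SVM and volume names):

::> snapmirror quiesce -destination-path nasprd-dr:vol_data
::> snapmirror show -destination-path nasprd-dr:* -fields state,status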

Stop the source vserver.

Verify vserver has been stopped.
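
A sketch of the stop and its verification, again using the placeholder SVM name:

::> vserver stop -vserver nasprd-dr
::> vserver show -vserver nasprd-dr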

Remove lif from source SVM.

Verify vserver lif has been removed.
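
For example, assuming a single data LIF named "nasprd-dr_data1" (a placeholder):

::> network interface delete -vserver nasprd-dr -lif nasprd-dr_data1
::> network interface show -vserver nasprd-dr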

Create the Testing SVM

Show vserver information

Verify enabled protocols

Add or remove protocols if necessary.
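
A sketch of the SVM creation and protocol checks; the root volume, aggregate, and protocol lists are placeholders to adapt to the environment:

::> vserver create -vserver testing-nasprd -rootvolume testing_nasprd_root -aggregate aggr1_node01 -rootvolume-security-style unix
::> vserver show -vserver testing-nasprd
::> vserver show -vserver testing-nasprd -fields allowed-protocols
::> vserver add-protocols -vserver testing-nasprd -protocols nfs,cifs
::> vserver remove-protocols -vserver testing-nasprd -protocols iscsi,fcp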

List snapshots of source SVM.

Create clones of the replication SVM volumes and assign them to the testing SVM. Choose latest snapshot from previous command.
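
A heavily hedged sketch: list the snapshots on the DR copy, then clone from the chosen (latest) snapshot. Volume and snapshot names are placeholders, and how the clone is assigned to testing-nasprd varies by ONTAP release (for example, volume rehost where supported); follow whichever approach has been validated in this environment.

::> volume snapshot show -vserver nasprd-dr -volume vol_data
::> volume clone create -vserver nasprd-dr -flexclone vol_data_test -parent-volume vol_data -parent-snapshot <latest-snapmirror-snapshot> -type RW
::> volume rehost -vserver nasprd-dr -volume vol_data_test -destination-vserver testing-nasprd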

Mount the volumes.

List the volumes for the SVM.
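
For example, with the placeholder clone volume from the previous step:

::> volume mount -vserver testing-nasprd -volume vol_data_test -junction-path /vol_data_test
::> volume show -vserver testing-nasprd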

Create an NFS export policy.

View export policy rules.
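
A sketch assuming a policy named "drtest" and an example client subnet (adjust clientmatch and access rules to match production):

::> vserver export-policy create -vserver testing-nasprd -policyname drtest
::> vserver export-policy rule create -vserver testing-nasprd -policyname drtest -clientmatch 10.0.0.0/24 -rorule sys -rwrule sys -superuser sys -protocol nfs
::> vserver export-policy rule show -vserver testing-nasprd -policyname drtest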

Create the NFS server.

View the status of the NFS server.

View the NFS server properties.
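
For example (enable the NFS versions used in production; v3 and v4.1 here are illustrative):

::> vserver nfs create -vserver testing-nasprd -v3 enabled -v4.1 enabled
::> vserver nfs status -vserver testing-nasprd
::> vserver nfs show -vserver testing-nasprd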

Create the Network LIF (Logical Interface).

View the Network LIF.
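
A sketch using the older -role/-data-protocol syntax (newer ONTAP releases use -service-policy instead); node, port, and IP values are placeholders:

::> network interface create -vserver testing-nasprd -lif testing-nasprd_data1 -role data -data-protocol nfs,cifs -home-node cluster01-01 -home-port e0d -address 10.0.0.50 -netmask 255.255.255.0
::> network interface show -vserver testing-nasprd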

Create the default route.

Show the default route.
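
For example, with a placeholder gateway address:

::> network route create -vserver testing-nasprd -destination 0.0.0.0/0 -gateway 10.0.0.1
::> network route show -vserver testing-nasprd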

Setup DNS servers on the SVM.

Show the SVM DNS properties.
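
A sketch with placeholder domain and DNS server addresses:

::> vserver services name-service dns create -vserver testing-nasprd -domains example.local -name-servers 10.0.0.10,10.0.0.11
::> vserver services name-service dns show -vserver testing-nasprd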

Create the SVM CIFS server.

Show the SVM CIFS server properties.
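
For example (the CIFS server name and AD domain are placeholders; the create command prompts for domain-join credentials):

::> vserver cifs create -vserver testing-nasprd -cifs-server TESTING-NASPRD -domain example.local
::> vserver cifs show -vserver testing-nasprd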

Create the UNIX to Windows and Windows to UNIX name mapping rules.

Show the SVM User Name mapping rules.
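
A sketch of simple wildcard mappings for a placeholder domain "EXAMPLE"; adjust the patterns and regex escaping to match the environment's actual mapping rules:

::> vserver name-mapping create -vserver testing-nasprd -direction win-unix -position 1 -pattern EXAMPLE\\(.+) -replacement "\1"
::> vserver name-mapping create -vserver testing-nasprd -direction unix-win -position 1 -pattern (.+) -replacement "EXAMPLE\\\1"
::> vserver name-mapping show -vserver testing-nasprd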

Create a CIFS share.

Show CIFS share properties.
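
For example, sharing the placeholder clone volume mounted earlier:

::> vserver cifs share create -vserver testing-nasprd -share-name vol_data_test -path /vol_data_test
::> vserver cifs share show -vserver testing-nasprd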

Create UNIX groups for NFS.

View the list of UNIX groups.

Create UNIX users for NFS.

View the list of UNIX users.
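
A sketch with placeholder group and user names, UIDs, and GIDs (mirror the IDs used on the production SVM):

::> vserver services name-service unix-group create -vserver testing-nasprd -name appgroup -id 1001
::> vserver services name-service unix-group show -vserver testing-nasprd
::> vserver services name-service unix-user create -vserver testing-nasprd -user appuser -id 1001 -primary-gid 1001
::> vserver services name-service unix-user show -vserver testing-nasprd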

General Availability: Azure NetApp Files backup

Published date: May 29, 2024.

Azure NetApp Files online snapshots are enhanced with backup of snapshots. With this backup capability, you can offload (vault) your Azure NetApp Files snapshots to a Backup Vault in a fast and cost-effective way, further protecting your data from accidental deletion. 

Backup further extends Azure NetApp Files’ built-in snapshot technology; when snapshots are vaulted to a Backup Vault only changed data blocks relative to previously vaulted snapshots are copied and stored, in an efficient format. Vaulted snapshots however are still represented in full and can be restored to a new volume individually and directly, eliminating the need for an iterative full-incremental recovery process.  

This feature is now generally available in all supported regions.

Learn more: Understand Azure NetApp Files backup | Microsoft Docs  


NetApp AFF A1K, AFF A90, and AFF A70 Unified Storage Systems Built for AI Era

Up to 2x better performance with 40 million IO/s, 1TB/s throughput, and integrated real-time ransomware detection designed for 99%+ accuracy and a ransomware recovery guarantee.

NetApp, Inc. announced AFF A-Series systems that can power the most demanding IT workloads customers face, including GenAI, VMware, and enterprise databases.

In the AI era, organizations are feeling pressure to accelerate innovation, unlock new customer experiences, outsmart cyber threats, and gain ever greater productivity. Many organizations see AI as a critical tool to help them achieve those goals. According to the 2024 NetApp Cloud Complexity report, organizations realize that achieving business success with AI hinges on two critical factors: data (74%) and IT infrastructure (71%). With today’s announcements, the company is helping organizations excel at both factors and drive competitive success by offering innovative intelligent data infrastructure that empowers customers to unlock the value of their data with AI.

The new AFF A-Series systems continue the firm’s leadership in unified data storage for the next generation of workloads. Leveraging the same technology relied upon by the top 3 public clouds, the AFF A-Series eliminates storage silos and storage complexity, providing powerful, intelligent, and secure storage to accelerate and optimize every workload. This includes integrated capabilities to optimize VMware storage costs today and provide unmatched flexibility for the future.

“Data is undeniably the most valuable asset for any company to outpace its competitors. Whether it’s mission critical applications or leveraging enterprise data to fuel AI, the data infrastructure a company chooses to run it on makes all the difference,” said Sandeep Singh, SVP and GM, enterprise storage. “NetApp’s extensive, unified data storage portfolio, from on-premises to the public clouds, makes it the go-to solution for enterprises looking to have the robustness for the most demanding workloads. The introduction of the new AFF A-Series Systems is a testament to our unwavering commitment to delivering the most powerful, intelligent, and secure enterprise storage in the industry.”

AFF A-Series systems to accelerate business technology operations

[Figure: NetApp AFF A-Series appliance comparison]

Notes: (1) Up to 40 million IO/s and up to 1TB/s throughput in a single cluster. (2) Check out our certifications. (3) Terms and conditions will apply. (4) Effective capacity based on 5:1 storage efficiency ratios with the maximum number of SSDs installed; space savings vary depending on workload and use cases. (5) Estimate under typical customer conditions – awaiting field data for the new product.

With the introduction of AFF A-Series all-flash storage systems, the company is continuing its commitment to innovation with unified storage systems designed for any data, any app, and any cloud. The AFF A-Series storage systems power the most demanding workloads – from existing mission-critical apps to GenAI workloads that will drive success into the future.

These systems are AFF A1K, AFF A90, and AFF A70, which can turbo-charge enterprise workloads by delivering:

  • Up to 2x better performance with unmatched 40 million IO/s, 1TB/s throughput
  • Proven 99.9999% data availability
  • Leading raw-to-effective capacity, including always-on data reduction and 4:1 Storage Efficiency Guarantee
  • Integrated real-time ransomware detection designed for 99%+ accuracy and Ransomware Recovery Guarantee

The firm’s unified storage supports block, file and object storage protocols and natively integrates with the 3 largest public cloud providers, allowing customers to consolidate workloads, lower the cost of data, and operate without silos. Powered by ONTAP, these systems deliver the simplicity and reliability that tens of thousands of organizations have come to expect from NetApp.

“As we’ve ramped up our investments in AI projects to help accelerate our business, we needed to grow our data infrastructure to deliver ever greater performance for those workloads,” said Christian Klie, tribe cluster lead, T-Systems. “We rely on intelligent data infrastructure delivered by NetApp to power our most critical workloads, and the increased power of the new AFF A-Series systems, paired with their integrated anti-ransomware features and hybrid cloud capabilities, will help position us for success now and in the future.”

“AI is creating the biggest business transformation opportunity we’ve seen in decades, allowing enterprises to unlock new sources of value from their data,” said Justin Hotard, EVP and GM, Data Center and AI Group, Intel Corp. “NetApp AFF A-Series systems utilizing Intel Xeon processors provide the performance and features to help businesses accelerate their enterprise AI adoption.”

Providing powerful, intelligent and secure enterprise data infrastructure

To continue its innovation as the intelligent data infrastructure company, the company released additional capabilities to provide customers with the advanced data management, ransomware protection, and cloud integration that modern workloads like GenAI demand.

Features and capabilities in NetApp’s data management and integrated services include:

New StorageGRID models: The firm has introduced 6 new StorageGRID models that enhance the value of large, unstructured data while reducing total cost of ownership. StorageGRID can leverage capacity flash to provide fast object access times at the lowest cost. Customers can experience a new level of flexibility, choice, performance, and sustainability for critical object workloads with new models that offer a competitive price/GB, up to a 3x performance increase, 80% footprint reduction, and power consumption savings as high as 70%.

Cyber Vault reference architecture: The company announced a new cyber vault reference architecture that extends the firm’s data protection capabilities. Combining the latest advances in secure data storage, autonomous real-time ransomware detection, and rapid data restoration, NetApp’s secure and resilient cyber vault delivers ‘logically air-gapped’ storage based on ONTAP technology, for unparalleled protection of customer data against advanced cyber-threats.

SnapMirror Active Sync: The latest version of ONTAP includes SnapMirror active sync, which creates a symmetric active-active business continuity solution across two data centers. Coupled with VMware vSphere Metro Storage Cluster (vMSC) and enterprise databases from Oracle, SAP, and Microsoft, SnapMirror active sync enables ongoing business operations with no disruption during a data center outage.

FlexCache with Writeback: The updated version of ONTAP also includes FlexCache with Writeback, which creates local copies of data for distributed teams, resulting in reduced latency and uninterrupted access while reducing administrative overhead. The local copies support read/write (RW) access, granting local teams greater control while maintaining data consistency with the core data center.

NetApp AIPod with Lenovo: NetApp and Lenovo are collaborating on a new converged infrastructure solution designed for retrieval-augmented generation (RAG) and inferencing use cases for GenAI, with Lenovo ThinkSystem servers utilizing Nvidia L40S GPUs, Nvidia Spectrum-X networking, and NetApp AFF storage, all validated with the Nvidia OVX architecture specification.

BlueXP classification: This AI/ML-driven service is now available as a BlueXP core capability at no additional charge, giving users immediate access to the ability to automatically classify, categorize, and tag data across the entire data estate to deepen data intelligence, enhancing efforts in governance, security, and compliance while enabling strategic workloads such as GenAI. With BlueXP classification, customers can now fuel GenAI and RAG innovation through the ability to securely and programmatically augment pre-trained models with auto-classified, proprietary data on demand for enhanced relevancy without sacrificing cost or data security.

“AI is a massive opportunity for companies to leverage their data in new ways to unlock competitive advantages,” said Ashish Nadkarni, group VP and GM, infrastructure systems, platforms and technologies, and BuyerView Research, IDC. “However, as the AI market develops, how organizations approach AI may change. They need storage infrastructure that gives them the flexibility to combine their on-premises data storage with cloud environments. NetApp’s strategy of delivering powerful unified data storage that works with any data protocol, in any environment, to run any workload gives its customers the power and flexibility they need to face whatever challenges come their way.”

Resources:

  • NetApp AFF A-Series: High-Performing Unified Storage
  • Unified Data Storage for the AI Era
  • Embrace the AI Era with the New NetApp AFF A-Series Storage Systems
  • The Leader in Object Storage Just Keeps Getting Better with New StorageGRID Systems and Capacity Flash
  • ONTAP: Data Management Software for a Better Hybrid Cloud Experience
  • StorageGRID: Smart, Fast, Future-Proof Object Storage


Monitoring Available Disk Space

A common system metric to monitor is the available disk space on a given system or host. This guide walks you through creating a monitor that alerts when free disk space falls below 10% on any host reporting to Datadog.

To create the monitor for available disk space:

In the navigation menu, click Monitors.

Click New Monitor.

Select Metric as the monitor type.

Define the metric:

In the Define the metric section, input system.disk.free and avg by host (Query a).

Click Add Query and input system.disk.total and avg by host (Query b).

In the formula that appears, replace the default a + b with a/b*100.

Alternatively, you can use the Source tab and define the query as avg:system.disk.free{*} by {host} / avg:system.disk.total{*} by {host} * 100

You can keep the default evaluation window of 5 minutes, or choose a longer one to avoid false alerts caused by temporarily unavailable disk space.

Set alert conditions:

  • Choose below from the threshold options.
  • Enter 10 in the value field (the monitor alerts when free disk space falls below 10%; the combined query appears below).
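
Combined, the resulting monitor query (as it would appear on the Source tab) looks roughly like this, assuming the default 5-minute window and the 10% threshold above:

avg(last_5m):avg:system.disk.free{*} by {host} / avg:system.disk.total{*} by {host} * 100 < 10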

Set notification options:

  • In Configure notifications & automations, specify the notification message. Include relevant details and a meaningful message template:
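
For example, a minimal template might look like the following; the @ops-team handle is a placeholder for your own notification channel:

{{#is_alert}}Free disk space on {{host.name}} has dropped below {{threshold}}% (currently {{value}}%). Please investigate. @ops-team{{/is_alert}}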

Set the rest of the notification options according to your preferences. Remember to save the monitor by clicking Create at the bottom of the page.


Colt Data Centre Services (Colt DCS), a global provider of hyperscale and large enterprise data centre solutions, published its latest Sustainability Highlight report covering the period of January 1st to December 31st, 2022. Launched as Colt DCS' highlights report, the report focuses on the three strategic areas of decarbonising the business, connecting people, and safeguarding the company's operations. Colt DCS has achieved a remarkable 52% reduction in Scope 1 and Scope 2 (market-based) emissions and a 28% reduction in Scope 3 emissions (compared to 2019). These significant reductions highlight the extent of the company's sustainable practices as an industry-leading data centre provider.

Some of the other key highlights from the Sustainability report include:

  • A 30% reduction in emissions across all Scopes compared to 2019
  • Achieved an impressive global Net Promoter Score (NPS) of 72
  • Procured 100% renewable energy in all European data centre sites

Colt DCS has been working diligently to achieve its ambitious sustainability targets, as part of its ongoing commitment to minimise environmental impact, promoting social responsibility and driving positive change within the global data centre industry. Sustainability is deeply ingrained within the company, with all stakeholders working tirelessly towards the business' shared ESG goals. Colt DCS' commitment to continuous improvement remains unwavering as they strive to advance and evolve their ESG strategy in the coming years.

The 2022 Sustainability Highlights report showcases the company's significant achievements in the areas of Environmental, Social and Governance (ESG) for 2022. It highlights Colt DCS' commitment to achieving reduced emissions in line with the SBTi's latest Net Zero Standard. With its EcoVadis partnership and a top 1% sustainability rating, Colt DCS continues to set data centre industry standards in its ESG practices. The 2022 Sustainability Highlights report was prepared in conjunction with the Colt Group, which comprises Colt Data Centre Services and Colt Technology Services.

Decarbonisation has been a primary focus for Colt DCS and one of the key pillars in its ESG strategy. Its sustainability targets have been approved by the Science-based Targets initiative (SBTi) in alignment with the latest Net Zero Standard. Colt DCS has successfully reduced its carbon footprint by 30% compared to the 2019 baseline, amounting to an estimated 186,487 tonnes of CO2e (carbon dioxide equivalent). This reduction has been aided by the use of 100% renewable energy across its UK and European data centres, all of which are 100% carrier-neutral sites, engaging suppliers more effectively, and implementing innovative cooling technologies. Moreover, Colt DCS ensures that its new data centres, such as its ongoing expansions in Paris, Frankfurt, India and Japan, and across Europe and the Asia-Pacific, adhere to the environmental requirements set down in the Global Reference Design Document.

Colt DCS' ESG strategy goes beyond environmental sustainability and includes a strong focus on social engagement. The company recognises the importance of stakeholder engagement across its value chain, including customers, suppliers, local communities and employees, with the aim of making a lasting positive impact in the regions it operates in. Colt DCS encourages employee engagement and has established partnerships with local charities, forming employee-led CSR teams to identify fundraising initiatives and volunteering opportunities. Local communities also contribute to Colt DCS' workforce, supplying local contractors and service providers who aid in the construction of the company's data centres.

The company recognises the importance of effective governance in achieving its goals for inclusion. During the pandemic, Colt DCS introduced designated Wellbeing Days, prioritising the mental and physical health of its workforce. The company's efforts saw 78% engagement with its People Matter survey, with the results highlighting strengths under Diversity & Inclusion, Customer Focus, Empowerment, Sustainable Engagement and Well-being & Stress.

Colt DCS is striving to create an inclusive culture that values diversity of thought and representative of the communities it operates in. In addition to fostering an inclusive working culture made up of diverse representation, the company is committing to implementing equitable business practices that enhance the employee experience. Moving forward, Colt DCS remains dedicated to investing in employee development and strengthening its partnerships with suppliers and customers to foster an inclusive and sustainable future.

In addition to prioritising stakeholder engagement, Colt DCS delivers exceptional client service across its data centre portfolio. In 2022, the company achieved an impressive global Net Promoter Score (NPS) of 72 across all customers in Europe and Asia, which is a testament to its goal of becoming the most trusted and customer-centric operator in the industry.

Since the 2022 report:

Colt DCS have achieved an NPS of 74

Colt DCS are now using 100% renewable energy in their Mumbai and Tokyo Shiohama sites.

  • Build Year: 2013.
  • 6 x two-story mountain halls
  • 22,600 m2 (230,000 ft2) of space available.
  • Data rooms from 50m2 - 1550m2 & rack solutions available.
  • Flexible design which can be scaled up or down based on our clients' requirements.
  • Tier III certified but can tailor-make your data center requirement to any lower Tier standard.
  • 3 independent grid supplies each fed from multiple hydropower plants.
  • 100% renewable hydropower.
  • Fjord cooling solution.
  • Average PUE: 1.18
  • Automatic standby generators
  • 24/7 onsite security personnel
  • Remote monitoring 24/7.
  • Biometric access systems, man traps and intelligent video surveillance.
  • Fire suppression using hypoxic air ventilation system.
  • EMP Secure.
  • Carrier neutral data center - 19 independent fiber providers available.
  • Two fully redundant meet-me-rooms in different parts of the facility for carriers with diverse fiber paths to them.
  • Office space for clients.
  • Smart hands services.

Introduction

In the heady world of Data Centre cooling, our customers said that they wanted Resilience, Energy Efficiency, Scalability and Sustainability. This then formed the basis of a £3M research and development programme for our global engineering teams. They were tasked to deliver, and deliver they have, judging by the already high levels of customer uptake for the new Hydro Denco solution!

Design Principles

Some of the foregoing customer requirements, such as resilience and energy efficiency, can often be antagonistic, as a few Data Centre operators have already found out to their cost. If you have to factor in the cost of any complexity induced downtime to investment calculations, payback periods will quickly become infinite!

With this in mind, the Hydro Denco system differs from previous concepts in that it offers extremely energy efficient cooling, but without any of the complexity associated with other technologies. Hydro Denco meets the requirements of the most demanding designers but will also be safe if left in the hands of the lowest bidder for the maintenance contract!

How was Hydro Denco Achieved?

Firstly, we ruled out any mechanical refrigeration, so no compressors, refrigerants, or complex controls. This gets rid of all associated regulatory issues such as F Gas compliance, whilst also addressing environmental and sustainability concerns.

Then we decided that AC motors would be banished, such that everything would be driven by Electronically Commutated (EC) type motors. This not only maximises efficiency, but also facilitates speed control and hence exceptional energy savings, particularly at reduced loadings. Our cooling coils then went under the spotlight, adopting the latest technologies and manufacturing techniques to produce the ultimate in efficient heat exchangers.

Then we streamlined the air system to allow relatively high air volume flow rates without the penalty of high dynamic pressure losses, to cut theoretical fan power to an absolute minimum. Lastly, we looked at the water system, and threw out the control valves, the balancing valves and the flow measuring valves, in fact anything that caused a resistance to water flow! The result of all this was Hydro Denco.

How Does Hydro Denco Work?

A very high efficiency indoor cooling coil is supplied with glycol solution from a very high efficiency external heat rejection coil. The term High Efficiency can be interpreted in several ways. In this case, we mean that the cooling coil can supply air at a temperature only 2K higher than that of the entering water, without significant air or water side pressure losses. Similarly, the heat rejection coil can lower the water temperature to within 4K of the prevailing external ambient wet bulb temperature, such that there is only a 6K uplift between indoor and outdoor conditions. A small, EC pump runs only at a speed necessary to provide sufficient heat transfer to maintain Data Centre conditions.

Integral touch screen controls adjust the speed of the pump and external fans to maintain the desired supply air temperature, whilst the internal fans modulate to provide a constant temperature difference between supply and return air.

Sounds Somewhat Simple - What are the Limitations?

In order to obviate the requirement for any mechanical refrigeration, the Data Centre needs to be located where the maximum external wet bulb temperature is 6K or more below the maximum desired supply air temperature to the servers. Even for a conservative 28°C supply temperature, the required external wet bulb of 22°C would permit deployment anywhere in Northern Europe and many other locations worldwide.

Ease of Installation and Commissioning

Simply run flow and return pipework, together with power and a control pair between indoor and outdoor units, to provide maximum resilience through multiple, autonomous systems.

As an example, a 200kW cooling system would require only DN80 piping and the controls read water temperatures and flow rate for ease of commissioning.

The Proof of the Pudding

Hydro Denco units with sensible cooling capacities of up to 250kW are already installed around the world, achieving cooling partial Power Utilisation Effectiveness (pPUE) figures of up to 1.15 in peak summer, reducing to around 1.05 during winter and some mid-season conditions. With an annualised pPUE of around 1.1, part load performance gets somewhat better, as thanks to the cube law, a system running at only 80% design cooling capacity will require only 50% of the design energy input and running at 50% design cooling capacity this falls to a mere 20% of energy input. You don't have to take our word for all this either, as Hydro Denco units incorporate their own energy monitoring systems within the on-board controls!

Benefits for the UK Grid

Data Centres are notoriously low-key buildings that rarely like to advertise their presence and probably for this reason, estimates of total energy draw from the UK grid vary considerably. It is however generally agreed that the figure lies somewhere between 5% and 10% of overall UK electricity consumption and this will only increase over time. A recent project involved a London Data Centre using 40% of the server power for the cooling system, a pPUE of 1.4 which is quite typical of legacy installations. The deployment of a Hydro Denco system reduced this pPUE to an average of 1.12, which represents a reduction of some 28% in terms of the cooling system energy consumption, and nearly 10% in terms of overall site energy consumption. We therefore think it reasonable to conclude that the potential energy saving for the installed base would be somewhere between 0.5% and 1% of UK energy consumption!

For all the reasons outlined previously, we consider Hydro Denco to be a worthy candidate for DCS Cooling Innovation of the Year, and sincerely hope that you will agree!

A shot in the arm, such as a DCS award would give the project much increased momentum, not just for the delight of FlaktGroup, but also allowing the data centre industry as a whole to benefit from the associated environmental and sustainability benefits.

Digital Realty, the largest global provider of cloud- and carrier-neutral data center, colocation and interconnection solutions, announced the continued momentum of ServiceFabric™, its service orchestration platform that seamlessly interconnects workflow participants, applications, clouds and ecosystems on PlatformDIGITAL®, its global data center platform. Following the recent introduction of Service Directory, a central marketplace that allows Digital Realty partners to highlight their offerings, over 70 members have joined the directory and listed more than 100 services, including secure and direct connections to over 200 global cloud on-ramps, creating a vibrant ecosystem for seamless interconnection and collaboration. Service Directory is a core component of the ServiceFabric™ product family that underpins Digital Realty's vision for interconnecting global data communities on PlatformDIGITAL® and enabling customers to tackle the challenges of Data Gravity head-on.

Chris Sharp, Chief Technology Officer, Digital Realty, said: 'ServiceFabric™ is redefining the way customers and partners interact with our global data center platform. By fostering an open and collaborative environment, we're empowering businesses to build and orchestrate their ideal solutions with unparalleled ease and efficiency.'

The need for an open interconnection and orchestration platform is critical as an enabler for artificial intelligence (AI) and high-performance compute (HPC), especially as enterprises increasingly deploy private AI applications, which rely on the low latency, private exchange of data between many members of an ecosystem. PlatformDIGITAL® was chosen to be the home of many groundbreaking AI and HPC workloads, and ServiceFabric™ was designed with the needs of cutting-edge applications in mind.

A key differentiator is Service Directory's Click-to-Connect capability, which allows customers to orchestrate and automate on-demand connections to the services they need, significantly streamlining workflows and removing manual configuration steps. With Click-to-Connect, ServiceFabric™ users can:

  • Generate secure service keys, granting controlled access to resources and partners with customisable security parameters
  • Automate approval workflows and facilitate connections to Service Directory, paving the way for seamless interconnectivity
  • Initiate service connections, significantly streamlining workflows between partners and customers
  • Integrate seamlessly with Service Directory, creating a unified experience for discovery, connection, and orchestration

Leading companies are embracing the power of ServiceFabric™ Service Directory, including:

'We are excited to extend our collaboration with Digital Realty by bringing together the automation capabilities of PlatformDIGITAL® and Console Connect. The availability of Console Connect on the ServiceFabric™ Service Directory provides Digital Realty customers with on-demand access to our full global ecosystem, while enabling Console Connect users to seamlessly interconnect to more Digital Realty locations worldwide. We share Digital Realty's vision of building a more open and interconnected digital ecosystem.' - Michael Glynn, SVP of Digital Automated Innovation, Console Connect

'ServiceFabric™ Service Directory is a great fit for our global digital infrastructure platform. It can enable our customers and partners to connect faster and more easily via Colt, regardless of their location.' - Mark Hollman, Vice President for Partner Development and Success, Colt Technology Services

'We firmly believe ServiceFabric™ Service Directory has the potential to transform the way businesses connect and consume cloud services. As Fortinet continues to work closely with Digital Realty to expand its global cloud network of SASE locations, the opportunity to grow our strategic partnership by offering additional Fortinet cloud-delivered security and connectivity solutions as part of an on-demand service will help an even broader number of customers seamlessly secure their business-critical data, devices, and applications.' - Michael Xie, Founder, President, and Chief Technology Officer, Fortinet

'ServiceFabric™ Service Directory provides us with a unique platform to showcase our high-performance compute and automated network solutions to a global audience of Digital Realty customers. We're confident it will help us expand our reach and accelerate our growth.' - Richard Nicholas, SVP Strategy and Corporate Development, Hivelocity

'Digital Realty's ServiceFabric™ has emerged as an essential feature of its portfolio. As one of the leading global providers of data center, colocation and interconnection services, Digital Realty enables agile digital infrastructure capability for enterprises and the IT ecosystem. Enabling real-time interconnection orchestration among IT partners and vendors is one of the most important requirements for customers leveraging colocation facilities. The continued growth and expansion of ServiceFabric is a testament to the relevance of this platform and the vision of Digital Realty.' - Courtney Munroe, RVP, IDC

'With ServiceFabric™ Service Directory, businesses can access and use cloud services in a new and dynamic way. Lumen's Network-as-a-Service solution leverages this platform to deliver on-demand experiences that address the most urgent business needs. We are excited to be part of this innovative platform that transforms the cloud landscape and helps us build on our vision to deliver a frictionless digital experience for our customers.' - Satish Lakshmanan, Lumen Chief Product Officer

'ServiceFabric™ Service Directory aligns perfectly with our commitment to providing hybrid and multi-cloud products that are easy to deploy and use. It's a valuable tool for our customers and partners.' - Lance Weaver, Chief Product and Technology Officer, Private Cloud, Rackspace

'As the cloud leader in emerging markets, the entire Zenlayer team is looking forward to helping Digital Realty customers extend their reach into LATAM, EMEA, and Asia Pacific via ServiceFabric™ and extended customer self-serve enablement through Service Directory.' - Craig Kaplan, SVP of Sales, Zenlayer

Concluded Sharp: 'Offering access to a dense and highly connected data community, ServiceFabric™ is poised to become the go-to platform for businesses seeking to build and manage their solutions in an open, secure, and efficient way. We are continuously evolving and expanding our platform to meet the next data exchange and evolving traffic patterns driven by hybrid, multi-cloud, and AI-workload solutions.'

As a data centre developer and operator with a large presence in south Wales, Vantage Data Centers has a longstanding relationship with Virgin Media Business Wholesale, a leading provider of fibre network solutions.

Together, we recognised the increasing demand for high bandwidth services in the region driven by edge and emerging technologies like artificial intelligence (AI). This led to upgrading network infrastructure to accommodate the immediate and future needs of customers wanting to leverage these technologies. Virgin Media O2 has committed to invest at least £10 billion in the UK over the next few years. This investment brings together next-generation fibre infrastructure and 5G services while expanding network coverage across new parts of the country.

Central to this ambition, Virgin Media Business Wholesale was quick to make south Wales an early priority. South Wales offers businesses easy access to the UK market, as well as further afield, and the Welsh Government is digitally progressive, providing one of Europe's most comprehensive support systems for industrial research and experimental development. South Wales is viewed as the UK's second major data centre region by major hyperscalers and global digital organisations.

An early beneficiary of Virgin Media Business Wholesale's M4 Severn Bridge cable upgrade is Vantage Data Centers' Cardiff, Wales campus (CWL1) and its growing base of customers. This latest development supports the secure high-speed data transmission of Vantage CWL1 customer's servers running day-to-day applications, critical HPC systems and the hosting of private and hybrid clouds.

Vantage Data Centers' CWL1 is one of Europe's largest data centre campuses. The 148MW, 186,000 square metre site is located near Cardiff and provides world-class facilities for hyperscale and colocation customers. In addition to serving hyperscale cloud provider organisations, Vantage's CWL1 campus offers data centre services of 2kW to 125kW densities per rack to wholesale customers and enterprise clients requiring IT hosting, private data halls and colocation solutions in the UK. CWL1 has been a strategic point-of-presence (PoP) on the Virgin Media Business Wholesale network for more than a decade, and the Severn Bridge cable upgrade marks a new era in the longstanding partnership between the two companies.

To perform a major fibre upgrade to meet the increasing capacity and shorter project delivery time requirements of Virgin Media Business Wholesale's IT solutions partners located on either side of the River Severn estuary, commencing with Vantage Data Centers' CWL1 campus near Cardiff.

To reinforce Vantage CWL1's continuous commitment to future-proofing its customers' overall network capacity needs by ensuring the widest possible choice of diverse, secure, and resilient high quality connectivity.

To enable further high-speed, diverse and resilient fibre network routes for Wales-based private and public sector businesses looking to boost productivity and efficiency.

The Innovation

In 2021, faced with growing demand from business and government organisations based in South Wales for increased bandwidth and resilience, Virgin Media Business Wholesale recognised the strategic importance of boosting its existing fibre capacity for businesses and public organisations depending on reliable and secure data transmission between south Wales and the rest of the UK.

The project started in January 2022 and involved increasing Virgin Media Business Wholesale's existing cable capacity over the Prince of Wales M4 Severn Bridge by deploying new specification fibre with a high fibre count, overcoming often challenging and hazardous conditions, including engineers working at height in very space-constrained environments on the bridge itself. The additional fibre cabling was successfully connected to Vantage's CWL1 campus in January 2024.

What are your product's/solution's key distinguishing features and/or USP?

Adds vital additional capacity to the region to ensure shorter delivery times for customer projects

Additional diverse and resilient network routes for businesses that depend on reliable and secure data transmission between south Wales and the rest of the UK

What tangible impact has your product/solution had on the market and your customers?

Historically, capacity bottlenecks have hindered business operations due to inadequate infrastructure. Our solution addresses this long-standing issue head-on. The new fibre provides a boost for wholesale partners on both sides of the River Severn estuary by connecting subsea landing stations to the M4 corridor and London. The ability to ensure highly diverse, resilient, secure and low latency network connectivity is key to the overall service offering of any modern data centre. In the case of Vantage's CWL1 campus, as a regional Internet hub and major communications node for Wales and the west of England, it is an absolute prerequisite. With the growing numbers of organisations in Wales, England and globally choosing to migrate their IT workloads to Vantage's world-class CWL1 campus, this development holds significant strategic value for its customers.

This upgrade allows Vantage CWL1 customers to benefit from both dark fibre and managed 'lit' services, with access to Virgin Media Business Wholesale's 190,000 km fibre network. The capacity of the pre-existing fibre link to Vantage CWL1 has more than doubled, with a total of 96 fibres now available. This expansion ensures robust and scalable connectivity for years to come, supporting global hyperscalers, major multinationals and also south Wales's growing community of businesses in various sectors such as fintech, engineering and life sciences.

What are the major differentiators between your product/solution and those of your primary competitors?

As one of the UK's leading backhaul infrastructure providers, Virgin Media Business Wholesale has demonstrated through this project its capability to build high-bandwidth, low-latency, resilient network infrastructure at scale. With the largest available dark fibre coverage and the second-largest fixed network spanning over 190,000km, Virgin Media Business Wholesale stands out from competitors by offering dark fibre across its national network without regional restrictions. By unlocking the capacity across the bridge, Virgin Media Business Wholesale allows customers to fully experience the benefits of this investment.

Please supply any supportive quotes and/or case study materials to demonstrate the value of this product/solution to your customers/partners

'Virgin Media's major fibre capacity boost at CWL1 contributes significantly to Vantage's ongoing commitment to providing totally future-proofed, highly-scalable data centre solutions to customers. Not only will hyperscalers and multinational systems integrators benefit, but so will the many small and medium enterprises, fintech, pharmaceuticals, government, retail and engineering organisations that underpin Wales's thriving digital economy.' - Justin Jenkins, Chief Operating Officer, EMEA, Vantage Data Centers

'Building out our fibre from Vantage's CWL1 campus is one of the significant investments Virgin Media O2 has made as we continue our mission to upgrade infrastructure across the country. We're confident this additional capacity will support a range of customers connecting into south Wales. It's projects like this that underscore our ability to build high bandwidth, low latency network infrastructure at the scale needed to drive digital transformation in the UK.' - John Chester, Director of Wholesale Fixed, Virgin Media O2 Business

1. What are your company's key distinguishing features and/or USP?

OryxAlign is a data centre focused MSP with a unique approach to customer engagement and service delivery. We offer a holistic set of in-demand services, ensuring our customers' data centre facilities are technologically advanced, cyber resilient and operationally efficient:

a) 24x7x365 critical infrastructure support
b) Monitoring and management
c) Secure network infrastructure design
d) Build and implementation
e) Managed IT procurement
f) New build consultancy
g) Programme management

The highly resilient IT infrastructure we design, implement, support and manage is a fundamental component of any facility - and essential for maintaining critical services such as BMS, PMS, CCTV, security and access control. Service resilience and continuity are vital for successful data centre operations, so we place an emphasis on ensuring our technology meets the highest standards and demands of these mission-critical facilities. Our managed IT services ensure these operations run smoothly and continuously.

We assume the responsibility for maintaining and managing the data centre networks, but also delivering proactive and user-centric IT support services. This ensures staff, facilities management and critical service partners on-site have uninterrupted access to the technology they require. Valued components of our user focused managed IT support services:

a) Desk-side support
b) Cyber awareness training
c) Tech workshops
d) E-learning platform to enhance productivity
e) Application development
f) Procurement services

We've spent decades actively listening to our clients, understanding their challenges and delivering transformative outcomes successfully. We've found that our unique approach of enterprise project delivery combined with client intimacy is why our customers continue to work with us. We believe ambitious organisations should be empowered by great technology that can transform operations, mitigate risks and propel business forward.

2. What tangible impact has your company had on the market and your customers?

Our experience in the data centre industry spans a period of 13 years. During that time, we have been engaged on 16 new data centre build projects, and we currently support the critical IT infrastructure for data centres with a combined total of just under 250 MW of capacity. We presently work with seven data centre operators across the UK, Ireland, and mainland Europe.

This includes VIRTUS Data Centres (VIRTUS), the UK's largest data centre operator. We've held a strategic partnership with VIRTUS over the last 10 years and have been instrumental in their rapid expansion from 1 to 11 data centres, thereby securing their market-leading position of today. We have delivered extensive project work and successful organisational change for VIRTUS over the course of the partnership.

This work ranges from the IT design, build and implementation of several of its data centres and a programme of managed infrastructure lifecycle replacement through to the implementation of an MDR security platform and SOC services, which has transformed the way the business operates and communicates. All of these advancements and projects have been managed, supported and maintained by OryxAlign's Managed Services division once implemented.

VIRTUS Data Centres (VIRTUS) Testimonial: "OryxAlign's holistic approach has proven to be a successful formula to help enable us to scale our data centre presence and operations. Whether it's critical infrastructure upgrades, everyday support, or providing solution design and implementation services, working with the OryxAlign team is a pleasure. Having worked closely with them over the past 10 years, we were happy to recently extend the strategic partnership for a further 3 years," Richard Owen, IT Manager at VIRTUS Data Centres.

3. What levels of customer service differentiate you from your competitors?

Proof of our customer service levels can be seen in our key performance indicators for January 2024 across our data centre clients:

Critical Response SLA: 100%
Critical Resolution SLA: 100%
Infrastructure Availability SLA: 100%
Overall Response SLA (Critical, High, Medium, Low): 95%
Overall Resolution SLA (Critical, High, Medium, Low): 93%
Overall Client Sentiment: 95%

Our managed IT services satisfy customers and differentiate us from competitors. These include automated infrastructure monitoring for swift issue resolution, proactive Digital Employee Experience (DEX) management, and seamless system integration allowing ticket raising from internal systems.

We also offer solution design guidance covering critical infrastructure, cybersecurity, and telecommunications, and our dedicated specialists facilitate procurement. Additionally, clients benefit from extensive e-learning resources (150 courses), monthly educational sessions, managed cyber awareness training, and support for achieving and maintaining compliance certifications like Cyber Essentials Plus and ISO.

4. What are the major differentiators between your company and your primary competitors?

OryxAlign's success hinges not only on our technical solutions and services but also on our people and organisational structure. Our people-centric approach is rooted in core values: caring, striving, supporting, trusting, and enjoying. These define our ethos and cultivate enduring bonds with our employees and valued clients. Key features of our people-focused structure:

i. Empowered IT service and client-facing teams: Enables swift problem resolution, reducing delays and improving client satisfaction.
ii. Continuous training and development: Investing in regular upskilling and training ensures our teams stay updated on the latest technologies, methodologies and best practices.
iii. Clear communication channels: Our flat organisational structure fosters open communication across levels, from Service Desk to CEO. This facilitates agile service support and decision-making, with control measures in place.
iv. Collaborative environments: Cross-functional teams collaborate to deliver holistic IT solutions, addressing client infrastructure needs.

We recognise our actions as a business affect the world around us, so we dedicate time, effort and talent to ensure we're doing all we can to positively impact the environment, our workers, our community and beyond:

a) Our Environmental, Social, and Governance (ESG) strategy prioritises sustainability, inclusivity, and decision-making accountability.
b) Recognition for ESG commitment: Won supply-chain award in December 2022. Finalist in CRN Tech Sustainability Awards in January 2024.
c) Company culture: Rewards dedication, trust, and loyalty, which fosters an engaging, transparent, fun and supportive work environment.
d) Award-winning corporate culture: Won Corporate Culture of the Year 2023 at the IT Europa Channel Awards in June 2023. Recognised for mental health awareness training, supporting staff during the pandemic and cost-of-living crisis, and benefits including private healthcare, pension contribution and support with energy bills.
e) Governance excellence: Regularly reviewed governance arrangements; ISO9001, ISO27001, and Cyber Essentials Plus certified for five years.
f) Diversity and promotion: 50% female senior leaders, 25% internal promotions, and all line managers trained in performance management and mental health support.
g) Investment in people: Continuous training and development of highly-skilled professionals focused on client needs.

CBRE Testimonial: "OryxAlign has been a preferred service delivery partner to CBRE since 2019 - engaging in technology projects and maintenance services in our Data Centre Solutions (DCS) and wider Global Workspace (GWS) divisions.

"I'm always impressed with OryxAlign's responsiveness, willingness to help, and industry-specific technical and operational knowledge. No challenge seems too complex - a refreshing attitude in our world of critical property infrastructure and data centre technology and services.

"OryxAlign's internal teams (projects, hardware maintenance, IT support, ITAD, network support, technology compliance consultancy and onsite engineering teams) are personable and highly skilled, regularly exceeding industry standards. The team always addresses technical and non-technical personnel, demonstrating the issues, explaining how they can be resolved and prevented from occurring again, which is a rare skill.

"OryxAlign currently supports and manages technology networks at several of our critical data centres in the UK and mainland Europe, as well as various multi-tenanted properties throughout the UK. The team at OryxAlign is an asset to CBRE and I would consider them to be a true, trusted technology partner which we look forward to growing with," Karl Marko CDMP, Global Associate Director, Technology CBRE (GWS and Data Centre Solutions).

CBRE Data Centre Solutions is a specialist business group within CBRE that helps customers reduce risk and efficiently manage their data centre facilities. Through technology, expert people, best-in-class processes, training, and global scale, we lead the market in our ability to deliver optimal facilities performance for our customers' data centres. Our customers benefit from the experience and best practice we have developed through operating FM in over 900 data centres across 46 countries.

At CBRE, our people are the most important element of our business. We pride ourselves on offering the best skills, training, and development programs to our staff. It starts with recruiting the best people and offering them industry-leading training programs. We ensure professional development through coaching and offer significant career progression opportunities and rewards. Our team comprises over 4,000 Data Centre Experts, and we are proud to provide them with double the industry average for training and development. This commitment results in the most highly qualified team, which directly contributes to our impressive retention rate of 96%.

To ensure we have the leading experts in the data centre industry, our bespoke DCS Shield training program not only focuses on training our existing team but also facilitates the transition of engineering staff from other critical industries such as healthcare, pharma, life sciences, and financial services. This is particularly crucial as it helps alleviate the shortage of skilled data centre engineers by nurturing talent from related fields.

DCS Shield Training: Our DCS Shield program is an integrated initiative encompassing Recruitment, Training, Development, and Reward. The core components of CBRE's tailored training program are outlined below.

Recruitment

- DCS Select: CBRE's proprietary recruiting tool, DCS Select, was developed in 2017 with Gartner CEB to assess the aptitudes and behaviours needed for success in CBRE Data Centre Solutions for Engineers and Technicians. The globally scalable tool was created through workshops, interviews, and assessments in partnership with Gartner CEB and their staff psychologists. It standardises processes for candidate assessment, skills gap analysis, and development planning, and provides branded reports for external hires and transitions. DCS Select ensures cultural fit, reducing attrition and sharpening operational focus.

- Expanding the Talent Pool: Sourcing talent for a rapidly growing business within the data centre industry is a challenge. To confront this, at CBRE we seek to attract talent from a more diverse range of sources, including veterans, maternity returners, and candidates recruited from outside the industry.

Training

We have developed an industry-leading suite of training and development courses which not only improves the performance of our people but also provides them with additional skills and experience to further their careers at CBRE and beyond.

- CNet Competency and Confidence Assessment Modelling (CCAM®): CBRE utilises the CCAM® Tool to establish and assess the baseline knowledge for each of our data centre technicians, and to monitor their improvement over time. This tool provides real-time analysis of both competence and confidence for individuals and teams and exposes root causes of employee behaviour (positive and negative) in data centre facilities. Its complex software works through various criteria to identify people risk. It maps out an individual's skill sets, knowledge base, and ability gaps. The results of each assessment allow the right course of development action to be planned and taken to address individuals' weaknesses.

- Human Factors Training: Research demonstrates that no matter how resilient the engineering infrastructure appears, nearly 70% of preventable failures relate to human error or process failure. CBRE has invested significant resources to combat human factor-related errors by engaging renowned psychologist Dr Tim Marsh PhD, MSc, CFIOSH, CPsychol, SFIIRSM MD to further develop the CBRE Data Centre Solutions Human Factors Training program. This program is designed to identify, analyse and improve specific human behaviours which impact the successful delivery of engineering-related services within a critical environment. CBRE has implemented this training for over five years, and the results speak for themselves: in 2019, only 16% of unplanned downtime was related to human error, versus an industry average of 70%.

- Data Centre Technician Professional (CDCTP®) & Data Centre Management Professional (CDCMP®) Certifications: CBRE is the first company in the industry globally to commit to certifying 100% of its global technical workforce. Through a strategic alliance with the industry-leading technical education company CNet Training, we deliver a comprehensive training and development program that requires each data centre technician to achieve the highly respected Certified Data Centre Technician Professional (CDCTP®) certification.

- Training Matrix: In 2021 CBRE introduced a Training Matrix for ALL roles, so whether you are a Technician, a Contract Manager or a Contract Support, there is a consistent programme of technical and soft skills training assigned to you. We have also invested more in management development, succession planning and talent management so that we can help our staff nurture their careers at CBRE. Case studies are included in the supporting document.

Professional Development

- Talent Coach: Talent Coach is CBRE's new talent management platform that supports CBRE's objective to attract, develop and retain top talent and allows us to combine multiple systems and processes into a single, integrated tool with mobile capabilities. It also allows scalability, sustainability, ease of use and greater engagement. Talent Coach has changed the way our people learn, grow and develop their careers, both personally and professionally. Talent Coach also acts as a means of testing whether employees have completed all mandatory training.

- Professional Development Plans: Our development programs are for all team members, including contract-based and central support staff, offering opportunities for professional growth, further education, and rewards for initiative. The training ensures administrative roles can advance into senior management or specialist technical roles. The curriculum is customised to individual roles and interests.

Advancement & Reward

- Opportunities for Career Progression & Mobility: At CBRE we prioritise internal talent to deliver on career development and mobility for our people. CBRE is a large, specialised company that is growing rapidly with significant potential for advancement.

- Employee Wellbeing & Reward: We understand that a healthy and happy workforce is the key to success, which is why we have implemented a global initiative called Be Well. This comprehensive program offers a wide range of resources, activities, training, and programs to support employee well-being on a global scale. Our employees also have access to a 24/7 Employee Assistance Program that provides a lifeline for those who may be facing challenges in their personal or professional lives. Recognising and rewarding outstanding performance is also an integral part of our company culture. Our Reward and Benefits philosophy includes site-specific rewards and recognition strategies that go beyond the ordinary. From 'on the spot awards' to 'Team of the Quarter' nominations, and even an 'Exceptional Award', we make sure our employees feel valued and appreciated for their hard work. We also offer good pay and benefits.

- Diversity, Equity and Inclusion, a Top Priority: CBRE strives for a work environment that reflects the diversity of the clients we serve, provides everyone with the opportunity to succeed, values the differences of each individual and recognises their contributions. Our Diversity, Equity & Inclusion (DE&I) mission brings our global DE&I framework to our clients, ensuring that our joint diversity goals are one of the foundations of our partnership. More information on our CBRE internal DE&I networks is in the supporting document.

- Apprentices: Our organisation hires over 60 apprentices every year from the local community. We see apprenticeships as a great way to help with succession planning and to bring new talent into the data centre industry. All apprenticeships at CBRE are also supported by nationally recognised training providers, so they gain maximum exposure to industry knowledge and the skills required to be successful.

- College Partnerships: CBRE has several partnerships with Heathrow UTC, Kingston College, Uxbridge and Hillingdon College to train young people as data centre engineers with mechanical and electrical engineering skills, knowledge of IoT devices, cabling infrastructure and project management. More information on our college partnerships is in the supporting document.

  • Uninstalling OADP
  • Backing up applications
  • Creating a Backup CR
  • Backing up persistent volumes with CSI snapshots
  • Backing up applications with File System Backup
  • Creating backup hooks
  • Scheduling backups using Schedule CR
  • Deleting backups
  • About Kopia
  • Restoring applications
  • Backing up applications on ROSA STS using OADP
  • Backing up applications on AWS STS using OADP
  • Introduction to OADP Data Mover
  • Using Data Mover for CSI snapshots
  • Using OADP 1.2 Data Mover with Ceph storage
  • About the OADP 1.3 Data Mover
  • Backing up and restoring volumes by using CSI snapshots data movement
  • Advanced OADP features and functionalities
  • Backing up etcd data
  • Replacing an unhealthy etcd member
  • About disaster recovery
  • Restoring to a previous cluster state
  • Recovering from expired control plane certificates
  • Migrating from version 3 to 4 overview
  • About migrating from OpenShift Container Platform 3 to 4
  • Differences between OpenShift Container Platform 3 and 4
  • Network considerations
  • Installing MTC
  • Installing MTC in a restricted network environment
  • Upgrading MTC
  • Premigration checklists
  • Migrating your applications
  • Advanced migration options
  • MTC release notes 1.8
  • MTC release notes 1.7
  • MTC release notes 1.6
  • MTC release notes 1.5
  • Direct Migration Requirements
  • Understanding API tiers
  • API compatibility guidelines
  • Editing kubelet log level verbosity and gathering logs
  • About Authorization APIs
  • LocalResourceAccessReview [authorization.openshift.io/v1]
  • LocalSubjectAccessReview [authorization.openshift.io/v1]
  • ResourceAccessReview [authorization.openshift.io/v1]
  • SelfSubjectRulesReview [authorization.openshift.io/v1]
  • SubjectAccessReview [authorization.openshift.io/v1]
  • SubjectRulesReview [authorization.openshift.io/v1]
  • SelfSubjectReview [authentication.k8s.io/v1]
  • TokenRequest [authentication.k8s.io/v1]
  • TokenReview [authentication.k8s.io/v1]
  • LocalSubjectAccessReview [authorization.k8s.io/v1]
  • SelfSubjectAccessReview [authorization.k8s.io/v1]
  • SelfSubjectRulesReview [authorization.k8s.io/v1]
  • SubjectAccessReview [authorization.k8s.io/v1]
  • About Autoscale APIs
  • ClusterAutoscaler [autoscaling.openshift.io/v1]
  • MachineAutoscaler [autoscaling.openshift.io/v1beta1]
  • HorizontalPodAutoscaler [autoscaling/v2]
  • Scale [autoscaling/v1]
  • About Config APIs
  • APIServer [config.openshift.io/v1]
  • Authentication [config.openshift.io/v1]
  • Build [config.openshift.io/v1]
  • ClusterOperator [config.openshift.io/v1]
  • ClusterVersion [config.openshift.io/v1]
  • Console [config.openshift.io/v1]
  • DNS [config.openshift.io/v1]
  • FeatureGate [config.openshift.io/v1]
  • HelmChartRepository [helm.openshift.io/v1beta1]
  • Image [config.openshift.io/v1]
  • ImageDigestMirrorSet [config.openshift.io/v1]
  • ImageContentPolicy [config.openshift.io/v1]
  • ImageTagMirrorSet [config.openshift.io/v1]
  • Infrastructure [config.openshift.io/v1]
  • Ingress [config.openshift.io/v1]
  • Network [config.openshift.io/v1]
  • Node [config.openshift.io/v1]
  • OAuth [config.openshift.io/v1]
  • OperatorHub [config.openshift.io/v1]
  • Project [config.openshift.io/v1]
  • ProjectHelmChartRepository [helm.openshift.io/v1beta1]
  • Proxy [config.openshift.io/v1]
  • Scheduler [config.openshift.io/v1]
  • About Console APIs
  • ConsoleCLIDownload [console.openshift.io/v1]
  • ConsoleExternalLogLink [console.openshift.io/v1]
  • ConsoleLink [console.openshift.io/v1]
  • ConsoleNotification [console.openshift.io/v1]
  • ConsolePlugin [console.openshift.io/v1]
  • ConsoleQuickStart [console.openshift.io/v1]
  • ConsoleSample [console.openshift.io/v1]
  • ConsoleYAMLSample [console.openshift.io/v1]
  • About Extension APIs
  • APIService [apiregistration.k8s.io/v1]
  • CustomResourceDefinition [apiextensions.k8s.io/v1]
  • MutatingWebhookConfiguration [admissionregistration.k8s.io/v1]
  • ValidatingWebhookConfiguration [admissionregistration.k8s.io/v1]
  • About Image APIs
  • Image [image.openshift.io/v1]
  • ImageSignature [image.openshift.io/v1]
  • ImageStreamImage [image.openshift.io/v1]
  • ImageStreamImport [image.openshift.io/v1]
  • ImageStreamLayers [image.openshift.io/v1]
  • ImageStreamMapping [image.openshift.io/v1]
  • ImageStream [image.openshift.io/v1]
  • ImageStreamTag [image.openshift.io/v1]
  • ImageTag [image.openshift.io/v1]
  • SecretList [image.openshift.io/v1]
  • About Machine APIs
  • ContainerRuntimeConfig [machineconfiguration.openshift.io/v1]
  • ControllerConfig [machineconfiguration.openshift.io/v1]
  • ControlPlaneMachineSet [machine.openshift.io/v1]
  • KubeletConfig [machineconfiguration.openshift.io/v1]
  • MachineConfig [machineconfiguration.openshift.io/v1]
  • MachineConfigNode [machineconfiguration.openshift.io/v1alpha1]
  • MachineConfigPool [machineconfiguration.openshift.io/v1]
  • MachineHealthCheck [machine.openshift.io/v1beta1]
  • Machine [machine.openshift.io/v1beta1]
  • MachineSet [machine.openshift.io/v1beta1]
  • About Metadata APIs
  • APIRequestCount [apiserver.openshift.io/v1]
  • Binding [undefined/v1]
  • ComponentStatus [undefined/v1]
  • ConfigMap [undefined/v1]
  • ControllerRevision [apps/v1]
  • Event [events.k8s.io/v1]
  • Event [undefined/v1]
  • Lease [coordination.k8s.io/v1]
  • Namespace [undefined/v1]
  • About Monitoring APIs
  • Alertmanager [monitoring.coreos.com/v1]
  • AlertmanagerConfig [monitoring.coreos.com/v1beta1]
  • AlertRelabelConfig [monitoring.openshift.io/v1]
  • AlertingRule [monitoring.openshift.io/v1]
  • PodMonitor [monitoring.coreos.com/v1]
  • Probe [monitoring.coreos.com/v1]
  • Prometheus [monitoring.coreos.com/v1]
  • PrometheusRule [monitoring.coreos.com/v1]
  • ServiceMonitor [monitoring.coreos.com/v1]
  • ThanosRuler [monitoring.coreos.com/v1]
  • About Network APIs
  • AdminPolicyBasedExternalRoute [k8s.ovn.org/v1]
  • CloudPrivateIPConfig [cloud.network.openshift.io/v1]
  • EgressFirewall [k8s.ovn.org/v1]
  • EgressIP [k8s.ovn.org/v1]
  • EgressQoS [k8s.ovn.org/v1]
  • EgressService [k8s.ovn.org/v1]
  • Endpoints [undefined/v1]
  • EndpointSlice [discovery.k8s.io/v1]
  • EgressRouter [network.operator.openshift.io/v1]
  • Ingress [networking.k8s.io/v1]
  • IngressClass [networking.k8s.io/v1]
  • IPPool [whereabouts.cni.cncf.io/v1alpha1]
  • NetworkAttachmentDefinition [k8s.cni.cncf.io/v1]
  • NetworkPolicy [networking.k8s.io/v1]
  • OverlappingRangeIPReservation [whereabouts.cni.cncf.io/v1alpha1]
  • PodNetworkConnectivityCheck [controlplane.operator.openshift.io/v1alpha1]
  • Route [route.openshift.io/v1]
  • Service [undefined/v1]
  • About Node APIs
  • Node [undefined/v1]
  • PerformanceProfile [performance.openshift.io/v2]
  • Profile [tuned.openshift.io/v1]
  • RuntimeClass [node.k8s.io/v1]
  • Tuned [tuned.openshift.io/v1]
  • About OAuth APIs
  • OAuthAccessToken [oauth.openshift.io/v1]
  • OAuthAuthorizeToken [oauth.openshift.io/v1]
  • OAuthClientAuthorization [oauth.openshift.io/v1]
  • OAuthClient [oauth.openshift.io/v1]
  • UserOAuthAccessToken [oauth.openshift.io/v1]
  • About Operator APIs
  • Authentication [operator.openshift.io/v1]
  • CloudCredential [operator.openshift.io/v1]
  • ClusterCSIDriver [operator.openshift.io/v1]
  • Console [operator.openshift.io/v1]
  • Config [operator.openshift.io/v1]
  • Config [imageregistry.operator.openshift.io/v1]
  • Config [samples.operator.openshift.io/v1]
  • CSISnapshotController [operator.openshift.io/v1]
  • DNS [operator.openshift.io/v1]
  • DNSRecord [ingress.operator.openshift.io/v1]
  • Etcd [operator.openshift.io/v1]
  • ImageContentSourcePolicy [operator.openshift.io/v1alpha1]
  • ImagePruner [imageregistry.operator.openshift.io/v1]
  • IngressController [operator.openshift.io/v1]
  • InsightsOperator [operator.openshift.io/v1]
  • KubeAPIServer [operator.openshift.io/v1]
  • KubeControllerManager [operator.openshift.io/v1]
  • KubeScheduler [operator.openshift.io/v1]
  • KubeStorageVersionMigrator [operator.openshift.io/v1]
  • MachineConfiguration [operator.openshift.io/v1]
  • Network [operator.openshift.io/v1]
  • OpenShiftAPIServer [operator.openshift.io/v1]
  • OpenShiftControllerManager [operator.openshift.io/v1]
  • OperatorPKI [network.operator.openshift.io/v1]
  • ServiceCA [operator.openshift.io/v1]
  • Storage [operator.openshift.io/v1]
  • About OperatorHub APIs
  • CatalogSource [operators.coreos.com/v1alpha1]
  • ClusterServiceVersion [operators.coreos.com/v1alpha1]
  • InstallPlan [operators.coreos.com/v1alpha1]
  • OLMConfig [operators.coreos.com/v1]
  • Operator [operators.coreos.com/v1]
  • OperatorCondition [operators.coreos.com/v2]
  • OperatorGroup [operators.coreos.com/v1]
  • PackageManifest [packages.operators.coreos.com/v1]
  • Subscription [operators.coreos.com/v1alpha1]
  • About Policy APIs
  • Eviction [policy/v1]
  • PodDisruptionBudget [policy/v1]
  • About Project APIs
  • Project [project.openshift.io/v1]
  • ProjectRequest [project.openshift.io/v1]
  • About Provisioning APIs
  • BMCEventSubscription [metal3.io/v1alpha1]
  • BareMetalHost [metal3.io/v1alpha1]
  • FirmwareSchema [metal3.io/v1alpha1]
  • HardwareData [metal3.io/v1alpha1]
  • HostFirmwareSettings [metal3.io/v1alpha1]
  • Metal3Remediation [infrastructure.cluster.x-k8s.io/v1beta1]
  • Metal3RemediationTemplate [infrastructure.cluster.x-k8s.io/v1beta1]
  • PreprovisioningImage [metal3.io/v1alpha1]
  • Provisioning [metal3.io/v1alpha1]
  • About RBAC APIs
  • ClusterRoleBinding [rbac.authorization.k8s.io/v1]
  • ClusterRole [rbac.authorization.k8s.io/v1]
  • RoleBinding [rbac.authorization.k8s.io/v1]
  • Role [rbac.authorization.k8s.io/v1]
  • About Role APIs
  • ClusterRoleBinding [authorization.openshift.io/v1]
  • ClusterRole [authorization.openshift.io/v1]
  • RoleBindingRestriction [authorization.openshift.io/v1]
  • RoleBinding [authorization.openshift.io/v1]
  • Role [authorization.openshift.io/v1]
  • About Schedule and quota APIs
  • AppliedClusterResourceQuota [quota.openshift.io/v1]
  • ClusterResourceQuota [quota.openshift.io/v1]
  • FlowSchema [flowcontrol.apiserver.k8s.io/v1beta3]
  • LimitRange [undefined/v1]
  • PriorityClass [scheduling.k8s.io/v1]
  • PriorityLevelConfiguration [flowcontrol.apiserver.k8s.io/v1beta3]
  • ResourceQuota [undefined/v1]
  • About Security APIs
  • CertificateSigningRequest [certificates.k8s.io/v1]
  • CredentialsRequest [cloudcredential.openshift.io/v1]
  • PodSecurityPolicyReview [security.openshift.io/v1]
  • PodSecurityPolicySelfSubjectReview [security.openshift.io/v1]
  • PodSecurityPolicySubjectReview [security.openshift.io/v1]
  • RangeAllocation [security.openshift.io/v1]
  • Secret [undefined/v1]
  • SecurityContextConstraints [security.openshift.io/v1]
  • ServiceAccount [undefined/v1]
  • About Storage APIs
  • CSIDriver [storage.k8s.io/v1]
  • CSINode [storage.k8s.io/v1]
  • CSIStorageCapacity [storage.k8s.io/v1]
  • PersistentVolume [undefined/v1]
  • PersistentVolumeClaim [undefined/v1]
  • StorageClass [storage.k8s.io/v1]
  • StorageState [migration.k8s.io/v1alpha1]
  • StorageVersionMigration [migration.k8s.io/v1alpha1]
  • VolumeAttachment [storage.k8s.io/v1]
  • VolumeSnapshot [snapshot.storage.k8s.io/v1]
  • VolumeSnapshotClass [snapshot.storage.k8s.io/v1]
  • VolumeSnapshotContent [snapshot.storage.k8s.io/v1]
  • About Template APIs
  • BrokerTemplateInstance [template.openshift.io/v1]
  • PodTemplate [undefined/v1]
  • Template [template.openshift.io/v1]
  • TemplateInstance [template.openshift.io/v1]
  • About User and group APIs
  • Group [user.openshift.io/v1]
  • Identity [user.openshift.io/v1]
  • UserIdentityMapping [user.openshift.io/v1]
  • User [user.openshift.io/v1]
  • About Workloads APIs
  • BuildConfig [build.openshift.io/v1]
  • Build [build.openshift.io/v1]
  • BuildLog [build.openshift.io/v1]
  • BuildRequest [build.openshift.io/v1]
  • CronJob [batch/v1]
  • DaemonSet [apps/v1]
  • Deployment [apps/v1]
  • DeploymentConfig [apps.openshift.io/v1]
  • DeploymentConfigRollback [apps.openshift.io/v1]
  • DeploymentLog [apps.openshift.io/v1]
  • DeploymentRequest [apps.openshift.io/v1]
  • Job [batch/v1]
  • Pod [undefined/v1]
  • ReplicationController [undefined/v1]
  • ReplicaSet [apps/v1]
  • StatefulSet [apps/v1]
  • About OpenShift Service Mesh
  • Service Mesh 2.x release notes
  • Upgrading Service Mesh
  • Understanding Service Mesh
  • Service Mesh deployment models
  • Service Mesh and Istio differences
  • Preparing to install Service Mesh
  • Installing the Operators
  • Creating the ServiceMeshControlPlane
  • Adding services to a service mesh
  • Enabling sidecar injection
  • Managing users and profiles
  • Traffic management
  • Metrics, logs, and traces
  • Performance and scalability
  • Deploying to production
  • OpenShift Service Mesh Console plugin
  • 3scale WebAssembly for 2.1
  • 3scale Istio adapter for 2.0
  • Troubleshooting Service Mesh
  • Control plane configuration reference
  • Kiali configuration reference
  • Jaeger configuration reference
  • Uninstalling Service Mesh
  • Service Mesh 1.x release notes
  • Service Mesh architecture
  • Installing Service Mesh
  • Deploying applications on Service Mesh
  • Data visualization and observability
  • Custom resources
  • 3scale Istio adapter for 1.x
  • Removing Service Mesh
  • About OpenShift Virtualization
  • Security policies
  • OpenShift Virtualization release notes
  • Getting started with OpenShift Virtualization
  • virtctl and libguestfs
  • Preparing your cluster
  • Installing OpenShift Virtualization
  • Uninstalling OpenShift Virtualization
  • Node placement rules
  • Updating OpenShift Virtualization
  • Creating VMs from Red Hat images overview
  • Creating VMs from instance types
  • Creating VMs from templates
  • Creating VMs from the CLI
  • Creating VMs from custom images overview
  • Creating VMs by using container disks
  • Creating VMs by importing images from web pages
  • Creating VMs by uploading images
  • Installing the QEMU guest agent and VirtIO drivers
  • Cloning VMs
  • Creating VMs by cloning PVCs
  • Connecting to VM consoles
  • Configuring SSH access to VMs
  • Editing virtual machines
  • Editing boot order
  • Deleting virtual machines
  • Exporting virtual machines
  • Managing virtual machine instances
  • Controlling virtual machine states
  • Using virtual Trusted Platform Module devices
  • Managing virtual machines with OpenShift Pipelines
  • Working with resource quotas for virtual machines
  • Specifying nodes for virtual machines
  • Activating kernel samepage merging (KSM)
  • Configuring certificate rotation
  • Configuring the default CPU model
  • UEFI mode for virtual machines
  • Configuring PXE booting for virtual machines
  • Using huge pages with virtual machines
  • Enabling dedicated resources for a virtual machine
  • Scheduling virtual machines
  • Configuring PCI passthrough
  • Configuring virtual GPUs
  • Enabling descheduler evictions on virtual machines
  • About high availability for virtual machines
  • Control plane tuning
  • Assigning compute resources
  • Hot-plugging VM disks
  • Expanding VM disks
  • Configuring shared volumes
  • Networking configuration overview
  • Connecting a VM to the default pod network
  • Exposing a VM by using a service
  • Connecting a VM to a Linux bridge network
  • Connecting a VM to an SR-IOV network
  • Using DPDK with SR-IOV
  • Connecting a VM to an OVN-Kubernetes secondary network
  • Hot plugging secondary network interfaces
  • Connecting a VM to a service mesh
  • Configuring a dedicated network for live migration
  • Configuring and viewing IP addresses
  • Accessing a VM by using the cluster FQDN
  • Managing MAC address pools for network interfaces
  • Storage configuration overview
  • Configuring storage profiles
  • Managing automatic boot source updates
  • Reserving PVC space for file system overhead
  • Configuring local storage by using HPP
  • Enabling user permissions to clone data volumes across namespaces
  • Configuring CDI to override CPU and memory quotas
  • Preparing CDI scratch space
  • Using preallocation for data volumes
  • Managing data volume annotations
  • About live migration
  • Configuring live migration
  • Initiating and canceling live migration
  • Node maintenance
  • Managing node labeling for obsolete CPU models
  • Preventing node reconciliation
  • Deleting a failed node to trigger VM failover
  • Cluster checkup framework
  • Prometheus queries for virtual resources
  • Virtual machine custom metrics
  • Virtual machine health checks
  • Collecting data for Red Hat Support
  • Backup and restore by using VM snapshots
  • Installing and configuring OADP
  • Backing up and restoring virtual machines
  • Backing up virtual machines
  • Restoring virtual machines
  • Disaster recovery

Persistent storage using Logical Volume Manager Storage

  • Prerequisites to install LVM Storage
  • Installing LVM Storage by using the CLI
  • Installing LVM Storage by using the web console
  • Installing LVM Storage in a disconnected environment
  • Installing LVM Storage by using RHACM
  • Limitations to configure the size of the devices used in LVM Storage
  • About adding devices to a volume group
  • Devices not supported by LVM Storage
  • Reusing a volume group from the previous LVM Storage installation
  • Integrating software RAID arrays with LVM Storage
  • Creating an LVMCluster CR by using the CLI
  • Creating an LVMCluster CR by using the web console
  • Creating an LVMCluster CR by using RHACM
  • Deleting an LVMCluster CR by using the CLI
  • Deleting an LVMCluster CR by using the web console
  • Deleting an LVMCluster CR by using RHACM
  • Provisioning storage
  • Scaling up the storage of clusters by using the CLI
  • Scaling up the storage of clusters by using the web console
  • Scaling up the storage of clusters by using RHACM
  • Expanding a persistent volume claim
  • Deleting a persistent volume claim
  • Limitations for creating volume snapshots in multi-node topology
  • Creating volume snapshots
  • Restoring volume snapshots
  • Deleting volume snapshots
  • Limitations for creating volume clones in multi-node topology
  • Creating volume clones
  • Deleting volume clones
  • Updating LVM Storage
  • Uninstalling LVM Storage using the web console
  • Uninstalling LVM Storage installed using RHACM
  • Downloading log files and diagnostic information using must-gather

Logical Volume Manager Storage uses the TopoLVM CSI driver to dynamically provision local storage on the OpenShift Container Platform clusters.

LVM Storage creates thin-provisioned volumes using Logical Volume Manager and provides dynamic provisioning of block storage on clusters with limited resources.

You can create volume groups, persistent volume claims (PVCs), volume snapshots, and volume clones by using LVM Storage.

Logical Volume Manager Storage installation

You can install Logical Volume Manager (LVM) Storage on an OpenShift Container Platform cluster and configure it to dynamically provision storage for your workloads.

You can install LVM Storage by using the OpenShift Container Platform CLI ( oc ), OpenShift Container Platform web console, or Red Hat Advanced Cluster Management (RHACM).

The prerequisites to install LVM Storage are as follows:

Ensure that you have a minimum of 10 milliCPU and 100 MiB of RAM.

Ensure that every managed cluster has dedicated disks that are used to provision storage. LVM Storage uses only those disks that are empty and do not contain file system signatures. To ensure that the disks are empty and do not contain file system signatures, wipe the disks before using them.

Before installing LVM Storage in a private CI environment where you can reuse the storage devices that you configured in the previous LVM Storage installation, ensure that you have wiped the disks that are not in use. If you do not wipe the disks before installing LVM Storage, you cannot reuse the disks without manual intervention.

If you want to install LVM Storage by using Red Hat Advanced Cluster Management (RHACM), ensure that you have installed RHACM on an OpenShift Container Platform cluster. See the "Installing LVM Storage using RHACM" section.

Red Hat Advanced Cluster Management for Kubernetes: Installing while connected online

As a cluster administrator, you can install LVM Storage by using the OpenShift CLI.

You have installed the OpenShift CLI ( oc ).

You have logged in to OpenShift Container Platform as a user with cluster-admin and Operator installation permissions.

Create a YAML file with the configuration for creating a namespace:

Create the namespace by running the following command:
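As a minimal sketch of these two steps, assuming the Operator-recommended openshift-storage namespace (the file name and the optional monitoring label are illustrative):

```bash
# Define the namespace in a YAML file (file name is illustrative).
cat <<EOF > lvms-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-storage
  labels:
    openshift.io/cluster-monitoring: "true"   # optional: include the namespace in cluster monitoring
EOF

# Create the namespace.
oc create -f lvms-namespace.yaml
```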

Create an OperatorGroup CR YAML file:

Create the OperatorGroup CR by running the following command:
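A minimal OperatorGroup sketch for the previous two steps, assuming the openshift-storage namespace; the CR name is illustrative:

```bash
# Define the OperatorGroup scoped to the openshift-storage namespace.
cat <<EOF > lvms-operatorgroup.yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-storage-operatorgroup   # illustrative name
  namespace: openshift-storage
spec:
  targetNamespaces:
    - openshift-storage
EOF

# Create the OperatorGroup CR.
oc create -f lvms-operatorgroup.yaml
```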

Create a Subscription CR YAML file:

Create the Subscription CR by running the following command:
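A hedged Subscription sketch for the previous two steps; the package name lvms-operator, the redhat-operators catalog source, and the stable-4.15 channel are assumptions to verify against the catalogs available on your cluster:

```bash
# Subscribe to the LVM Storage operator package.
cat <<EOF > lvms-subscription.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: lvms-operator
  namespace: openshift-storage
spec:
  name: lvms-operator              # assumed package name
  channel: stable-4.15             # assumed channel; match your OpenShift version
  source: redhat-operators         # assumed catalog source
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
EOF

# Create the Subscription CR.
oc create -f lvms-subscription.yaml
```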

To verify that LVM Storage is installed, run the following command:
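For example, assuming the openshift-storage namespace, you can list the ClusterServiceVersions and check that the LVM Storage CSV reports the Succeeded phase:

```bash
# The LVM Storage ClusterServiceVersion should report the Succeeded phase.
oc get csv -n openshift-storage
```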

You can install LVM Storage by using the OpenShift Container Platform web console.

You have access to the cluster.

You have access to OpenShift Container Platform with cluster-admin and Operator installation permissions.

Log in to the OpenShift Container Platform web console.

Click Operators → OperatorHub.

Click LVM Storage on the OperatorHub page.

Set the following options on the Operator Installation page:

Update Channel as stable-4.15.

Installation Mode as A specific namespace on the cluster.

Installed Namespace as Operator recommended namespace openshift-storage. If the openshift-storage namespace does not exist, it is created during the operator installation.

Update approval as Automatic or Manual.

Optional: Select the Enable Operator recommended cluster monitoring on this Namespace checkbox.

Click Install.

Verify that LVM Storage shows a green tick, indicating successful installation.

You can install LVM Storage on OpenShift Container Platform in a disconnected environment. All sections referenced in this procedure are linked in the "Additional resources" section.

You read the "About disconnected installation mirroring" section.

You have access to the OpenShift Container Platform image repository.

You created a mirror registry.

Follow the steps in the "Creating the image set configuration" procedure. To create an ImageSetConfiguration custom resource (CR) for LVM Storage, you can use the following example ImageSetConfiguration CR configuration:
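A hedged example of an ImageSetConfiguration CR for this purpose; the mirror registry URL, catalog index version, and package name (lvms-operator) are assumptions to adapt to your environment:

```bash
cat <<EOF > imageset-config-lvms.yaml
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
  registry:
    imageURL: mirror.example.com/oc-mirror-metadata                    # assumed mirror registry
mirror:
  operators:
    - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15   # assumed catalog index
      packages:
        - name: lvms-operator                                          # assumed package name
EOF
```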

Follow the procedure in the "Mirroring an image set to a mirror registry" section.

Follow the procedure in the "Configuring image registry repository mirroring" section.

Mirroring the OpenShift Container Platform image repository

Creating the image set configuration

Mirroring an image set to a mirror registry

Configuring image registry repository mirroring

Why use imagestreams

To install LVM Storage on the clusters by using Red Hat Advanced Cluster Management (RHACM), you must create a Policy custom resource (CR). You can also configure the criteria to select the clusters on which you want to install LVM Storage.

You have access to the RHACM cluster using an account with cluster-admin and Operator installation permissions.

You have dedicated disks that LVM Storage can use on each cluster.

The cluster must be managed by RHACM.

Log in to the RHACM CLI using your OpenShift Container Platform credentials.

Create a namespace.

Create a Policy CR YAML file:

Create the Policy CR by running the following command:
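A trimmed, illustrative sketch that covers the previous two steps; all names and the channel are assumptions, and the PlacementRule and PlacementBinding that select the target clusters are omitted:

```bash
cat <<EOF > policy-lvms-install.yaml
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: install-lvms
  namespace: lvms-policy-ns                 # placeholder namespace created in the previous step
spec:
  remediationAction: enforce
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: install-lvms
        spec:
          remediationAction: enforce
          severity: low
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  name: openshift-storage
            - complianceType: musthave
              objectDefinition:
                apiVersion: operators.coreos.com/v1
                kind: OperatorGroup
                metadata:
                  name: openshift-storage-operatorgroup
                  namespace: openshift-storage
                spec:
                  targetNamespaces:
                    - openshift-storage
            - complianceType: musthave
              objectDefinition:
                apiVersion: operators.coreos.com/v1alpha1
                kind: Subscription
                metadata:
                  name: lvms-operator
                  namespace: openshift-storage
                spec:
                  channel: stable-4.15        # assumed channel
                  name: lvms-operator         # assumed package name
                  source: redhat-operators
                  sourceNamespace: openshift-marketplace
EOF

# Create the Policy CR. A PlacementRule and PlacementBinding (not shown)
# select the managed clusters that this Policy applies to.
oc create -f policy-lvms-install.yaml
```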

Upon creating the Policy CR, the following custom resources are created on the clusters that match the selection criteria configured in the PlacementRule CR:

OperatorGroup

Subscription

About the LVMCluster custom resource

The limitations to configure the size of the devices that you can use to provision storage using LVM Storage are as follows:

The total storage size that you can provision is limited by the size of the underlying Logical Volume Manager (LVM) thin pool and the over-provisioning factor.

The size of the logical volume depends on the size of the Physical Extent (PE) and the Logical Extent (LE).

You can define the size of PE and LE during the physical and logical device creation.

The default PE and LE size is 4 MB.

If the size of the PE is increased, the maximum size of the LVM is determined by the kernel limits and your disk space.

The maximum size differs between the theoretical limit and the tested limit.

You can configure the LVMCluster CR to perform the following actions:

Create LVM volume groups that you can use to provision persistent volume claims (PVCs).

Configure a list of devices that you want to add to the LVM volume groups.

Configure the requirements to select the nodes on which you want to create an LVM volume group, and the thin pool configuration for the volume group.

Force wipe the selected devices.

After you have installed LVM Storage, you must create an LVMCluster custom resource (CR).

Explanation of fields in the LVMCluster CR

Selected LVMCluster CR fields are described below:

Wiping the device can lead to inconsistencies in data integrity if any of the following conditions are met:

The device is being used as swap space.

The device is part of a RAID array.

The device is mounted.

If any of these conditions are true, do not force wipe the disk. Instead, you must manually wipe the disk.

deviceClasses.thinPoolConfig

Contains the configuration to create a thin pool in the LVM volume group.

thinPoolConfig.name

Specify a name for the thin pool.

thinPoolConfig.sizePercent

Specify the percentage of space in the LVM volume group for creating the thin pool.

By default, this field is set to 90. The minimum value that you can set is 10, and the maximum value is 90.

thinPoolConfig.overprovisionRatio

Specify a factor by which you can provision additional storage based on the available storage in the thin pool.

For example, if this field is set to 10, you can provision up to 10 times the amount of available storage in the thin pool.

To disable over-provisioning, set this field to 1.

The deviceSelector field in the LVMCluster CR contains the configuration to specify the paths to the devices that you want to add to the LVM volume group.

You can specify the device paths in the deviceSelector.paths field, the deviceSelector.optionalPaths field, or both. If you do not specify device paths in either field, LVM Storage adds the supported unused devices to the LVM volume group.

The devices that you want to add to the volume group must be supported by LVM Storage. For information about unsupported devices, see "Devices not supported by LVM Storage" in the "Additional resources" section.

If you do not add the deviceSelector field in the LVMCluster CR, LVM Storage automatically adds the new devices when the devices are available.

LVM Storage adds the devices to the LVM volume group only if the following conditions are met:

The device path exists.

The device is supported by LVM Storage.

You can also add the path to the RAID arrays to integrate the RAID arrays with LVM Storage. For more information, see "Integrating RAID arrays with LVM Storage" in the "Additional resources" section.

When you are adding the device paths in the deviceSelector field of the LVMCluster custom resource (CR), ensure that the devices are supported by LVM Storage. If you add paths to the unsupported devices, LVM Storage excludes the devices to avoid complexity in managing logical volumes.

If you do not specify any device path in the deviceSelector field, LVM Storage adds only the unused devices that it supports.

LVM Storage does not support the following devices:

Devices with the ro parameter set to true .

Devices with the state parameter set to suspended .

Devices with the type parameter set to rom .

Devices with the type parameter set to lvm .

Devices with the partlabel parameter set to bios , boot , or reserved .

Devices with the fstype parameter set to any value other than null or LVM2_member .

To get the information about the volume groups of the device, run the following command:

To get the mount points of a device, run the following command:
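Standard Linux utilities can provide both pieces of information; for example, replacing /dev/sdb with the device that you are checking:

```bash
# Show the volume group, if any, that the device belongs to.
sudo pvs /dev/sdb

# Show the mount points of the device and its partitions.
lsblk --output NAME,FSTYPE,MOUNTPOINT /dev/sdb
```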

Ways to create an LVMCluster custom resource

You can create an LVMCluster custom resource (CR) by using the OpenShift CLI ( oc ) or the OpenShift Container Platform web console. If you have installed LVM Storage by using Red Hat Advanced Cluster Management (RHACM), you can also create an LVMCluster CR by using RHACM.

Upon creating the LVMCluster CR, LVM Storage creates the following system-managed CRs:

A storageClass and volumeSnapshotClass for each device class.

LVMVolumeGroup : This CR is a specific type of persistent volume (PV) that is backed by an LVM volume group. It tracks the individual volume groups across multiple nodes.

LVMVolumeGroupNodeStatus : This CR tracks the status of the volume groups on a node.

You can reuse an existing volume group (VG) from the previous LVM Storage installation instead of creating a new VG.

You can reuse only the VG, not the logical volumes associated with it.

The VG that you want to reuse must not be corrupted.

The VG that you want to reuse must have the lvms tag. For more information on adding tags to LVM objects, see Grouping LVM objects with tags .

Open the LVMCluster CR YAML file.

Configure the LVMCluster CR parameters as described in the following example:

Save the LVMCluster CR YAML file.

You can create the Redundant Array of Independent Disks (RAID) array by using the mdadm utility, and integrate the RAID array with LVM Storage. Logical Volume Manager (LVM) does not support creating a software RAID.

You can integrate the RAID array with LVM Storage while creating the LVMCluster custom resource (CR). If you have already created an LVMCluster CR, you can edit the existing LVMCluster CR to add the RAID array.

You created a software RAID during the OpenShift Container Platform installation.

You have installed LVM Storage.

Open the LVMCluster CR YAML file. If you have already created the LVMCluster CR, edit the existing LVMCluster CR by running the following command:

Add the path to the RAID array in the deviceSelector field of the LVMCluster CR YAML file.
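For illustration, assuming an existing LVMCluster CR named my-lvmcluster and a software RAID device at /dev/md0:

```bash
# Open the existing LVMCluster CR for editing.
oc edit lvmcluster my-lvmcluster -n openshift-storage

# In the editor, add the RAID device path under deviceSelector, for example:
#
#   deviceSelector:
#     paths:
#       - /dev/md0   # software RAID array created with mdadm
```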

Configuring a RAID-enabled data volume

Creating a software RAID on an installed system

Replacing a failed disk in RAID

Repairing RAID disks

You can create an LVMCluster custom resource (CR) on a worker node using the OpenShift CLI ( oc ).

You have logged in to OpenShift Container Platform as a user with cluster-admin privileges.

You have installed a worker node in the cluster.

You read the "About the LVMCluster custom resource" section. See the "Additional resources" section.

Create an LVMCluster custom resource (CR) YAML file:
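A minimal LVMCluster sketch; the deviceSelector and thinPoolConfig fields follow the descriptions given earlier, while the apiVersion, CR name, device path, default, and fstype values are assumptions to adapt:

```bash
cat <<EOF > lvmcluster.yaml
apiVersion: lvm.topolvm.io/v1alpha1   # assumed API version
kind: LVMCluster
metadata:
  name: my-lvmcluster                 # illustrative name
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
      - name: vg1
        default: true                 # assumed optional field
        fstype: ext4                  # assumed optional field
        deviceSelector:
          paths:
            - /dev/sdb                # dedicated, empty disk on the worker node
        thinPoolConfig:
          name: thin-pool-1
          sizePercent: 90
          overprovisionRatio: 10
EOF
```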

Create the LVMCluster CR by running the following command:

Check that the LVMCluster CR is in the Ready state:

Optional: To view the storage classes created by LVM Storage for each device class, run the following command:

Optional: To view the volume snapshot classes created by LVM Storage for each device class, run the following command:
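A sketch of the create and verification commands for the previous steps, assuming the file and namespace from the example above:

```bash
oc create -f lvmcluster.yaml

# The LVMCluster CR should eventually report a Ready state.
oc get lvmcluster -n openshift-storage

# Storage classes and volume snapshot classes created for each device class.
oc get storageclass
oc get volumesnapshotclass
```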

You can create an LVMCluster CR on a worker node using the OpenShift Container Platform web console.

You have access to the OpenShift Container Platform cluster with cluster-admin privileges.

Click Operators → Installed Operators.

In the openshift-storage namespace, click LVM Storage.

Click Create LVMCluster and select either Form view or YAML view.

Configure the required LVMCluster CR parameters.

Click Create.

Optional: If you want to edit the LVMCluster CR, perform the following actions:

Click the LVMCluster tab.

From the Actions menu, select Edit LVMCluster.

Click YAML and edit the required LVMCluster CR parameters.

Click Save.

On the LVMCluster page, check that the LVMCluster CR is in the Ready state.

Optional: To view the available storage classes created by LVM Storage for each device class, click Storage → StorageClasses.

Optional: To view the available volume snapshot classes created by LVM Storage for each device class, click Storage → VolumeSnapshotClasses.

After you have installed LVM Storage by using RHACM, you must create an LVMCluster custom resource (CR).

You have installed LVM Storage by using RHACM.

You have access to the RHACM cluster using an account with cluster-admin permissions.

Create a ConfigurationPolicy CR YAML file with the configuration to create an LVMCluster CR:

Create the ConfigurationPolicy CR by running the following command:

Ways to delete an LVMCluster custom resource

You can delete an LVMCluster custom resource (CR) by using the OpenShift CLI ( oc ) or the OpenShift Container Platform web console. If you have installed LVM Storage by using Red Hat Advanced Cluster Management (RHACM), you can also delete an LVMCluster CR by using RHACM.

Upon deleting the LVMCluster CR, LVM Storage deletes the following CRs:

storageClass

volumeSnapshotClass

LVMVolumeGroup

LVMVolumeGroupNodeStatus

You can delete the LVMCluster custom resource (CR) using the OpenShift CLI ( oc ).

You have access to OpenShift Container Platform as a user with cluster-admin permissions.

You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.

Log in to the OpenShift CLI ( oc ).

Delete the LVMCluster CR by running the following command:

To verify that the LVMCluster CR has been deleted, run the following command:
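For example, assuming an LVMCluster CR named my-lvmcluster in the openshift-storage namespace:

```bash
oc delete lvmcluster my-lvmcluster -n openshift-storage

# The deleted CR must no longer appear in the output.
oc get lvmcluster -n openshift-storage
```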

You can delete the LVMCluster custom resource (CR) using the OpenShift Container Platform web console.

Click Operators → Installed Operators to view all the installed Operators.

Click LVM Storage in the openshift-storage namespace.

From the Actions menu, select Delete LVMCluster.

Click Delete.

On the LVMCluster page, check that the LVMCluster CR has been deleted.

If you have installed LVM Storage by using Red Hat Advanced Cluster Management (RHACM), you can delete an LVMCluster CR by using RHACM.

You have access to the RHACM cluster as a user with cluster-admin permissions.

Delete the ConfigurationPolicy CR YAML file that was created for the LVMCluster CR:

Create a Policy CR YAML file to delete the LVMCluster CR:

Create a Policy CR YAML file to check if the LVMCluster CR has been deleted:

Check the status of the Policy CRs by running the following command:

After you have created the LVM volume groups using the LVMCluster custom resource (CR), you can provision the storage by creating persistent volume claims (PVCs).

The following are the minimum storage sizes that you can request for each file system type:

block : 8 MiB

xfs : 300 MiB

ext4 : 32 MiB

To create a PVC, you must create a PersistentVolumeClaim object.

You have created an LVMCluster CR.

Create a PersistentVolumeClaim object:

Create the PVC by running the following command:

To verify that the PVC is created, run the following command:
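A minimal PVC sketch for the previous steps; the storage class name assumes the lvms-<device-class> naming that LVM Storage typically uses (here lvms-vg1), and the PVC name and namespace are placeholders:

```bash
cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-file-1            # placeholder name
  namespace: default          # placeholder namespace
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: lvms-vg1  # assumed lvms-<device-class> storage class
EOF

# Verify that the PVC is created and eventually bound.
oc get pvc -n default
```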

Ways to scale up the storage of clusters

OpenShift Container Platform supports additional worker nodes for clusters on bare metal user-provisioned infrastructure. You can scale up the storage of clusters either by adding new worker nodes with available storage or by adding new devices to the existing worker nodes.

Logical Volume Manager (LVM) Storage detects and uses additional worker nodes when the nodes become active.

To add a new device to the existing worker nodes on a cluster, you must add the path to the new device in the deviceSelector field of the LVMCluster custom resource (CR).

You can scale up the storage capacity of the worker nodes on a cluster by using the OpenShift CLI ( oc ).

You have additional unused devices on each cluster to be used by Logical Volume Manager (LVM) Storage.

You have created an LVMCluster custom resource (CR).

Edit the LVMCluster CR by running the following command:

Add the path to the new device in the deviceSelector field.

Save the LVMCluster CR.
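As a sketch of these steps, assuming an LVMCluster CR named my-lvmcluster and a newly attached disk at /dev/sdc:

```bash
# Open the LVMCluster CR for editing.
oc edit lvmcluster my-lvmcluster -n openshift-storage

# Append the new device path under deviceSelector, for example:
#
#   deviceSelector:
#     paths:
#       - /dev/sdb   # existing device
#       - /dev/sdc   # newly attached device
```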

You can scale up the storage capacity of the worker nodes on a cluster by using the OpenShift Container Platform web console.

Click the LVMCluster tab to view the LVMCluster CR created on the cluster.

Click the YAML tab.

Edit the LVMCluster CR to add the new device path in the deviceSelector field:

You can scale up the storage capacity of worker nodes on the clusters by using RHACM.

You have access to the RHACM cluster using an account with cluster-admin privileges.

You have created an LVMCluster custom resource (CR) by using RHACM.

Edit the LVMCluster CR that you created using RHACM by running the following command:

In the LVMCluster CR, add the path to the new device in the deviceSelector field.

After scaling up the storage of a cluster, you can expand the existing persistent volume claims (PVCs).

To expand a PVC, you must update the storage field in the PVC.

Dynamic provisioning is used.

The StorageClass object associated with the PVC has the allowVolumeExpansion field set to true .

Update the value of the spec.resources.requests.storage field to a value that is greater than the current value by running the following command:

To verify that resizing is completed, run the following command:
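One way to perform both steps, assuming the example PVC from the previous section and a new size of 2Gi:

```bash
# Request a larger size on the PVC.
oc patch pvc lvm-file-1 -n default --type=merge \
  -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'

# Verify that the reported capacity reflects the new size.
oc get pvc lvm-file-1 -n default -o jsonpath='{.status.capacity.storage}'
```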

LVM Storage adds the Resizing condition to the PVC during expansion. It deletes the Resizing condition after the PVC expansion.

Enabling volume expansion support

You can delete a persistent volume claim (PVC) by using the OpenShift CLI ( oc ).

Delete the PVC by running the following command:

To verify that the PVC is deleted, run the following command:

The deleted PVC must not be present in the output of this command.

About volume snapshots

You can create snapshots of persistent volume claims (PVCs) that are provisioned by LVM Storage.

You can perform the following actions using the volume snapshots:

Back up your application data.

Revert to a state at which the volume snapshot was taken.

LVM Storage has the following limitations for creating volume snapshots in multi-node topology:

Creating volume snapshots is based on the LVM thin pool capabilities.

After creating a volume snapshot, the node must have additional storage space for further updating the original data source.

You can create volume snapshots only on the node where you have deployed the original data source.

Pods relying on the PVC that uses the snapshot data can be scheduled only on the node where you have deployed the original data source.

OADP features

You can create volume snapshots based on the available capacity of the thin pool and the over-provisioning limits. To create a volume snapshot, you must create a VolumeSnapshotClass object.

You ensured that the persistent volume claim (PVC) is in Bound state. This is required for a consistent snapshot.

You stopped all the I/O to the PVC.

Create a VolumeSnapshot object:

Create the volume snapshot in the namespace where you created the source PVC by running the following command:

LVM Storage creates a read-only copy of the PVC as a volume snapshot.

To verify that the volume snapshot is created, run the following command:
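A sketch of the VolumeSnapshot object and the related commands, assuming the example PVC created earlier and a volume snapshot class named lvms-vg1 created by LVM Storage:

```bash
cat <<EOF | oc create -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: lvm-file-1-snapshot          # placeholder name
  namespace: default                 # namespace of the source PVC
spec:
  volumeSnapshotClassName: lvms-vg1  # assumed class created by LVM Storage
  source:
    persistentVolumeClaimName: lvm-file-1
EOF

# The READYTOUSE column should report true once the snapshot is ready.
oc get volumesnapshot -n default
```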

The value of the READYTOUSE field for the volume snapshot that you created must be true .

To restore a volume snapshot, you must create a persistent volume claim (PVC) with the dataSource.name field set to the name of the volume snapshot.

The restored PVC is independent of the volume snapshot and the source PVC.

You have created a volume snapshot.

Create a PersistentVolumeClaim object with the configuration to restore the volume snapshot:

Create the PVC in the namespace where you created the volume snapshot by running the following command:

To verify that the volume snapshot is restored, run the following command:
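A sketch of the restore PVC, assuming the example volume snapshot created earlier:

```bash
cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-file-1-restore
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: lvms-vg1
  resources:
    requests:
      storage: 1Gi
  dataSource:
    name: lvm-file-1-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOF

# The restored PVC should reach the Bound state.
oc get pvc lvm-file-1-restore -n default
```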

You can delete the volume snapshots of the persistent volume claims (PVCs).

You have ensured that the volume snapshot that you want to delete is not in use.

Delete the volume snapshot by running the following command:

To verify that the volume snapshot is deleted, run the following command:

The deleted volume snapshot must not be present in the output of this command.

About volume clones

A volume clone is a duplicate of an existing persistent volume claim (PVC). You can create a volume clone to make a point-in-time copy of the data.

LVM Storage has the following limitations for creating volume clones in multi-node topology:

Creating volume clones is based on the LVM thin pool capabilities.

After creating a volume clone, the node must have additional storage space to support further updates to the original data source.

You can create volume clones only on the node where you have deployed the original data source.

Pods relying on the PVC that uses the clone data can be scheduled only on the node where you have deployed the original data source.

To create a clone of a persistent volume claim (PVC), you must create a PersistentVolumeClaim object in the namespace where you created the source PVC.

You ensured that the source PVC is in Bound state. This is required for a consistent clone.

Create the PVC in the namespace where you created the source PVC by running the following command:

To verify that the volume clone is created, run the following command:
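A sketch of a clone PVC, assuming the example source PVC created earlier:

```bash
cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-file-1-clone
  namespace: default        # must be the namespace of the source PVC
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: lvms-vg1
  resources:
    requests:
      storage: 1Gi
  dataSource:
    name: lvm-file-1        # source PVC to clone
    kind: PersistentVolumeClaim
EOF

# The cloned PVC should reach the Bound state.
oc get pvc lvm-file-1-clone -n default
```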

You can delete volume clones.

Delete the cloned PVC by running the following command:

To verify that the volume clone is deleted, run the following command:

The deleted volume clone must not be present in the output of this command.

You can update LVM Storage to ensure compatibility with the OpenShift Container Platform version.

You have updated your OpenShift Container Platform cluster.

You have installed a previous version of LVM Storage.

You have access to the cluster using an account with cluster-admin permissions.

Update the Subscription custom resource (CR) that you created while installing LVM Storage by running the following command:

View the update events to check that the installation is complete by running the following command:

Verify the LVM Storage version by running the following command:
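A hedged example of these steps, assuming the Subscription is named lvms-operator in the openshift-storage namespace and that stable-4.15 is the target channel:

```bash
# Point the Subscription at the channel that matches the new cluster version.
oc patch subscription lvms-operator -n openshift-storage --type=merge \
  -p '{"spec":{"channel":"stable-4.15"}}'

# Watch the install events and confirm that the new CSV reaches the Succeeded phase.
oc get events -n openshift-storage --sort-by='.lastTimestamp'
oc get csv -n openshift-storage
```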

Monitoring LVM Storage

To enable cluster monitoring, you must add the following label in the namespace where you have installed LVM Storage:
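The commonly used label for this purpose is openshift.io/cluster-monitoring=true; as a sketch, assuming the openshift-storage namespace:

```bash
oc label namespace openshift-storage openshift.io/cluster-monitoring=true
```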

You can monitor LVM Storage by viewing the metrics.

The following table describes the topolvm metrics:

When the thin pool and volume group reach maximum storage capacity, further operations fail. This can lead to data loss.

LVM Storage sends the following alerts when the usage of the thin pool and volume group exceeds a certain value:

You can uninstall LVM Storage using the OpenShift Container Platform web console.

You have deleted the LVMCluster custom resource (CR).

Click Operators → Installed Operators.

Click the Details tab.

From the Actions menu, select Uninstall Operator.

Optional: When prompted, select the Delete all operand instances for this operator checkbox to delete the operand instances for LVM Storage.

Click Uninstall.

To uninstall LVM Storage that you installed using RHACM, you must delete the RHACM Policy custom resource (CR) that you created for installing and configuring LVM Storage.

You have deleted the LVMCluster CR that you created using RHACM.

Delete the RHACM Policy CR that you created for installing and configuring LVM Storage by using the following command:

Create a Policy CR YAML file with the configuration to uninstall LVM Storage:

When LVM Storage is unable to automatically resolve a problem, use the must-gather tool to collect the log files and diagnostic information so that you or Red Hat Support can review the problem and determine a solution.

Run the must-gather command from the client connected to the LVM Storage cluster:
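A typical invocation looks like the following; the must-gather image name and tag are assumptions that must match your installed LVM Storage version:

```bash
oc adm must-gather \
  --image=registry.redhat.io/lvms4/lvms-must-gather-rhel9:v4.15 \
  --dest-dir=./lvms-must-gather
```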

About the must-gather tool

ProjectPractical.com

Top 33 NetApp Interview Questions and Answers 2024

Editorial Team


Preparing for an interview can often feel like a daunting task, especially when it comes to positions related to technology and data management. NetApp, being a leader in the data services sector, requires candidates to have a solid understanding of its systems, products, and the underlying technologies. Whether you are a seasoned professional or a newcomer aiming to make your mark in the world of data services, getting acquainted with the most common interview questions for NetApp can provide a significant advantage.

This guide has been meticulously put together to offer a comprehensive overview of the top 33 NetApp interview questions along with their answers. It aims to equip candidates with the knowledge and confidence needed to excel in their interviews. By familiarizing yourself with these queries and their responses, you can approach your NetApp interview with greater assurance and poise, significantly enhancing your chances of success.

NetApp Interview Preparation Tips

  • Focus on practical applications of your technical knowledge, especially how it relates to NetApp’s products and services.
  • Demonstrating an understanding of how NetApp’s solutions can be applied in real-world scenarios can set you apart.
  • Stay updated on the latest trends in cloud computing, data storage, and networking as they relate to NetApp’s offerings.

1. What Do You Know About NetApp And Its Products And Solutions?

Tips to Answer:

  • Relate your answer to specific experiences or use cases you’ve had with NetApp’s products, showcasing your direct knowledge and expertise.
  • Highlight how keeping up-to-date with NetApp developments has enabled you to solve complex storage challenges or improve system efficiencies.

Sample Answer: I’ve worked extensively with NetApp’s suite of products, particularly their ONTAP operating system, which has been instrumental in managing data storage and ensuring high availability and disaster recovery. My experience ranges from implementing their AFF systems for high-performance needs to utilizing their FAS series for more versatile storage requirements. I keep abreast of the latest updates through webinars, online forums, and NetApp’s technical documentation. This proactive approach has empowered me to leverage NetApp’s data protection features, like SnapMirror for efficient data replication, significantly enhancing the data recovery strategies for my clients.

2. How Do You Stay Up-To-Date With The Latest Developments In The Storage Industry?

  • Research and follow industry-leading blogs, websites, and forums dedicated to storage technologies and trends.
  • Participate in webinars, workshops, and conferences to network with professionals and learn about emerging technologies directly from experts.

Sample Answer: I actively follow several key tech blogs and industry news platforms to keep abreast of the latest developments in the storage industry. Regularly attending webinars and industry conferences also plays a crucial part in my learning, allowing me to hear firsthand from leading experts about new technologies and methodologies. This approach not only helps me stay informed about the latest trends but also enables me to apply this knowledge practically in designing and implementing storage solutions for my clients, ensuring they benefit from the most current and efficient technologies available.

3. Can You Explain The Difference Between HDD And SSD?

  • Highlight the key differences such as speed, durability, and use cases.
  • Use personal experience or examples to illustrate the differences and benefits of each.

Sample Answer: In my experience, the primary difference between HDD (Hard Disk Drive) and SSD (Solid State Drive) lies in their construction and performance. HDDs use mechanical parts and magnetic storage, which makes them slower in read/write speeds compared to SSDs. On the other hand, SSDs use flash memory to store data, leading to significantly faster data access and higher speeds. This difference also impacts their durability; SSDs, without moving parts, are more resistant to physical shock and run more quietly. Additionally, SSDs consume less power, which can be a critical factor in portable devices. In practice, I’ve recommended SSDs for systems requiring fast boot times and high-speed data access, while HDDs can be more cost-effective for bulk storage where speed is not as critical.

4. How Do You Approach Designing And Implementing A Storage Solution For A New Customer?

  • Start by assessing the customer’s specific needs and current infrastructure to tailor the storage solution effectively.
  • Highlight the importance of scalability and flexibility in your designs to accommodate future growth and changes in technology.

Sample Answer: In designing and implementing a storage solution for a new customer, I first conduct a thorough analysis of their business requirements, data volume, and growth projections. This involves engaging with key stakeholders to understand their objectives and challenges. Based on this insight, I recommend a storage architecture that aligns with their performance needs, budget, and future scalability. For instance, if a customer prioritizes high-speed data access, I might suggest an SSD-based solution, while also considering hybrid models to balance cost and performance. I ensure the design is flexible, to adapt to evolving needs, and emphasize robust data protection and disaster recovery strategies.

5. Can You Describe Your Experience With NetApp’s ONTAP Operating System?

  • Reflect on specific projects where you utilized ONTAP features such as data protection, efficiency, or scalability. Provide concrete examples.
  • Highlight your ability to leverage ONTAP to solve business challenges, improve system performance, or optimize storage resources.

Sample Answer: I’ve worked extensively with NetApp’s ONTAP operating system for over five years, focusing on optimizing storage efficiency and ensuring robust data protection for my clients. One project involved deploying ONTAP to consolidate storage systems, which resulted in a 30% improvement in storage utilization and significantly reduced operational costs. I’ve also leveraged its data protection features, such as SnapMirror and SnapVault, to enhance disaster recovery strategies for several organizations, ensuring business continuity amid potential data threats. My experience with ONTAP has been instrumental in not just addressing, but anticipating the storage needs of my clients, allowing for scalable, secure, and efficient storage solutions.

6. How Do You Ensure Data Availability And Disaster Recovery For Your Clients?

  • Highlight your understanding of disaster recovery principles, including regular backups, off-site storage, and disaster recovery planning.
  • Mention your experience with specific tools or methodologies that enhance data availability and ensure robust disaster recovery strategies.

Sample Answer: In ensuring data availability and disaster recovery for my clients, I start by conducting a thorough risk assessment to understand potential threats to data integrity. Based on this, I develop a comprehensive disaster recovery plan that includes regular backups, with both on-site and off-site storage options to protect against data loss from physical damage. I leverage technologies such as synchronous and asynchronous replication to ensure that data is continuously available across multiple locations, minimizing downtime. I also conduct regular disaster recovery drills to ensure that the recovery process is efficient and that all team members are familiar with the recovery procedures. My goal is always to minimize data loss and recovery time, ensuring business continuity for my clients.

7. Can You Explain The Concept Of Thin Provisioning And How It Can Benefit A Customer?

  • Highlight the cost-effectiveness and improved storage utilization benefits of thin provisioning.
  • Use specific examples or scenarios where thin provisioning has directly benefited past projects or customers.

Sample Answer: In my experience, thin provisioning is a method that allows for the efficient allocation of storage space among multiple users or applications without immediately allocating the physical storage. This means it allocates storage space on a need basis rather than upfront. One key benefit I’ve seen firsthand is the reduction in upfront storage costs for a customer. Instead of purchasing large amounts of storage that remain unused for a long period, they can invest in what they need when they need it, which significantly cuts down on wasted resources. Additionally, it simplifies storage management and increases flexibility as the demand for storage grows. For instance, in a previous project, we implemented thin provisioning for a client’s database system, which reduced their initial storage procurement by 40%, showcasing significant cost savings and operational efficiency.

8. How Do You Approach Capacity Planning For A Customer’s Storage Needs?

  • Start by understanding the customer’s current and anticipated storage requirements, including data growth trends and any seasonal fluctuations.
  • Use tools and software for predictive analysis and modeling to accurately forecast future storage needs, ensuring scalability and flexibility in the storage solution.

Sample Answer: In approaching capacity planning for a customer’s storage needs, I first gather detailed information about their existing data environment, usage patterns, and business objectives. This involves analyzing historical data usage trends and discussing with key stakeholders to understand anticipated growth or changes in their operations. I then leverage predictive analytics tools to model future requirements, taking into account factors like data retention policies and potential for data consolidation. My goal is to design a scalable, efficient storage solution that aligns with the customer’s budget and ensures they can seamlessly accommodate future data growth without unnecessary expenditure or disruptions.

9. Can You Describe Your Experience With NetApp’s FlexPod Converged Infrastructure Solution?

  • Focus on specific projects or tasks you’ve worked on that involved FlexPod, highlighting your role and the outcomes.
  • Discuss how your experience with FlexPod has helped improve efficiency, scalability, or performance for your clients or organization.

Sample Answer: My experience with NetApp’s FlexPod converged infrastructure solution has been deeply rewarding. In one of my notable projects, I was responsible for designing and implementing a FlexPod solution for a client looking to upgrade their data center infrastructure. My role involved conducting a thorough needs assessment, designing the architecture, and overseeing the deployment process. The FlexPod solution enabled us to integrate compute, storage, and networking into a single, streamlined system, significantly enhancing our client’s operational efficiency and data management capabilities. This experience not only honed my technical skills but also improved my ability to deliver solutions that directly address client needs.

10. How Do You Ensure Data Security And Compliance For Your Clients?

  • Focus on specific technologies and strategies you have used in the past to enhance data security and ensure compliance with industry standards and regulations.
  • Share examples of how you have tailored security measures to meet the unique needs of different clients, demonstrating your ability to think critically and customize solutions.

Sample Answer: In my experience, ensuring data security and compliance starts with a thorough understanding of the client’s industry regulations and data protection needs. I always begin by conducting a risk assessment to identify potential vulnerabilities within their existing systems. Based on this assessment, I implement layered security measures including encryption, multi-factor authentication, and regular security audits. For compliance, I stay updated on regulations like GDPR and HIPAA, ensuring that all storage solutions are designed to meet these standards. I also educate my clients on the importance of ongoing compliance monitoring and reporting, providing them with the tools and knowledge they need to maintain secure and compliant data environments.

11. Can You Explain The Difference Between iSCSI And Fibre Channel Protocols?

  • Focus on the technical differences between iSCSI and Fibre Channel, including how each protocol handles data transfer.
  • Use specific examples from your experience where choosing one protocol over the other made a significant impact on project outcomes.

Sample Answer: In my work, I’ve found that iSCSI, which operates over TCP/IP networks, provides a cost-effective solution by utilizing existing network infrastructure. This flexibility allows for easier scalability and integration into diverse environments. On the other hand, Fibre Channel, with its dedicated high-speed network, offers superior performance and reliability, essential for environments where data integrity and speed are critical. For instance, in a recent deployment for a financial services firm, the choice of Fibre Channel was crucial due to its low latency and high throughput, ensuring real-time access to sensitive transactions.

12. How Do You Approach Troubleshooting Performance Issues In A Storage Environment?

  • Start by systematically identifying the issue, including checking for common problems like hardware failures, software misconfigurations, or network bottlenecks.
  • Utilize diagnostic tools and logs to gather data, and apply your knowledge of the storage system’s architecture to pinpoint the root cause.

Sample Answer: When troubleshooting storage performance issues, my first step is to isolate the problem area, whether it’s related to hardware, software, or the network. I check for any recent changes in the environment that could have impacted performance. Using tools like system logs and performance monitoring software, I analyze data throughput, I/O rates, and error rates to identify any anomalies. Understanding the storage architecture allows me to hypothesize potential issues, which I then test systematically. Communication with the team is key, ensuring we’re aligned in our approach and leveraging collective insights to resolve the issue efficiently.

13. Can You Describe Your Experience With NetApp’s SnapMirror And SnapVault Data Protection Solutions?

  • Highlight specific projects or experiences where you utilized SnapMirror and SnapVault, focusing on the challenges you faced and how you overcame them.
  • Emphasize the benefits these solutions offered to past projects or organizations, like improved data recovery times or enhanced data protection.

Sample Answer: In my previous role, I was responsible for implementing NetApp’s SnapMirror and SnapVault in our organization’s data protection strategy. With SnapMirror, I was able to efficiently replicate critical data across our data centers, ensuring high availability and quick disaster recovery. SnapVault was instrumental in our long-term retention strategy, allowing us to efficiently back up and archive our data. One challenge I encountered was optimizing the network bandwidth for SnapMirror. I overcame this by scheduling replication during off-peak hours and using NetApp’s compression features, which significantly improved our replication times without impacting our network performance. This experience taught me the importance of not just implementing solutions, but also fine-tuning them to meet the specific needs and constraints of the organization.

14. How Do You Approach Migrating Data From A Legacy Storage System To A New NetApp Solution?

  • Start by assessing the existing data and storage systems to understand the scope and requirements of the migration.
  • Plan a detailed migration strategy that minimizes downtime and ensures data integrity, considering factors like data prioritization, testing, and validation processes.

Sample Answer: In approaching data migration from a legacy storage system to a new NetApp solution, I first conduct a comprehensive audit of the current environment to identify the data types, sizes, and any potential challenges. I prioritize data based on the business needs, ensuring critical data is migrated first. My plan includes a phased approach, allowing for testing and validation at each stage to ensure data integrity and minimal disruption. I leverage NetApp’s powerful tools and features, such as SnapMirror for data replication, ensuring a smooth transition. Communication with stakeholders is key throughout the process to manage expectations and address any concerns promptly.

15. Can You Explain The Concept Of Deduplication And How It Can Improve Storage Efficiency?

  • Focus on explaining deduplication in simple terms, highlighting its impact on reducing storage requirements by removing duplicate copies of data.
  • Share a real-life example or scenario where deduplication significantly improved storage efficiency, making sure to detail the context and outcome.

Sample Answer: In my experience, deduplication is a technique used to eliminate redundant data in the storage system, which significantly enhances storage efficiency. By storing only unique instances of data and creating references for any duplicates, we can drastically reduce the amount of physical storage space required. For instance, in a project I led, implementing deduplication allowed us to cut down our client’s storage needs by 50%, enabling them to allocate resources more effectively and reduce costs. This process not only optimizes storage but also speeds up data backup and recovery processes, making it a vital strategy in data management.

  • Focus on explaining deduplication in simple terms and how it directly impacts storage efficiency.
  • Give a real-world example or a brief case study to illustrate the benefits of deduplication.

Sample Answer: In my experience, deduplication is a data reduction technique that eliminates duplicate copies of repeating data. By applying deduplication, we significantly reduce the storage space required for data. For instance, if a company sends the same 1MB presentation to 100 employees, traditional storage would need 100MB. With deduplication, only one instance of the presentation is stored, and pointers are used for the rest, drastically cutting down the storage need to just 1MB plus a small amount of overhead for the pointers. This not only saves on physical storage costs but also improves bandwidth efficiency for backup and replication processes, making it a key strategy in managing data growth effectively.
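The 1MB presentation scenario can be worked through in a short Python sketch that stores each unique block once and keeps pointers for the copies. The hashing scheme and 1 MB "block" size are simplifications chosen only to illustrate the arithmetic, not a description of ONTAP's internal implementation.

```python
# Minimal sketch of content-addressed deduplication; illustrative only.
import hashlib

store: dict[str, bytes] = {}   # unique data blocks keyed by fingerprint
references: list[str] = []     # per-copy pointers to stored blocks

def write_copy(data: bytes) -> None:
    digest = hashlib.sha256(data).hexdigest()
    if digest not in store:          # store the block only once
        store[digest] = data
    references.append(digest)        # every copy just keeps a pointer

presentation = b"x" * (1024 * 1024)          # the same 1 MB file...
for _ in range(100):                         # ...sent to 100 employees
    write_copy(presentation)

logical_mb = len(references)                                        # 100 copies x 1 MB
physical_mb = sum(len(b) for b in store.values()) // (1024 * 1024)  # 1 MB stored
print(f"Logical {logical_mb} MB, physical {physical_mb} MB plus pointer overhead")
```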

17. Can You Describe Your Experience With NetApp’s OnCommand Management Software?

  • Highlight specific instances where you utilized OnCommand software to solve business challenges or improve storage management.
  • Mention any certifications or training related to NetApp products, especially OnCommand, to demonstrate your proficiency and commitment to staying updated with the technology.

Sample Answer: I’ve had extensive experience with NetApp’s OnCommand management software, primarily using it to streamline operations and enhance storage efficiency in several projects. One key project involved deploying OnCommand Insight for a client facing challenges with storage visibility across their hybrid environment.

18. How Do You Approach Designing And Implementing A Data Archiving Solution For A Customer?

  • Focus on understanding the customer’s specific data retention needs and regulatory compliance requirements.
  • Emphasize the importance of assessing data access patterns to optimize the archiving strategy.

Sample Answer: When designing a data archiving solution, I start by closely working with the customer to understand their data retention policies and regulatory needs. This involves identifying the types of data they generate, how often it needs to be accessed, and any industry-specific compliance requirements they must adhere to. Based on this, I propose an archiving strategy that balances cost with accessibility, ensuring that infrequently accessed data is moved to a more cost-effective storage tier without compromising on retrieval times. I leverage technologies that automate the archiving process, making it seamless and scalable. This approach not only ensures compliance and data protection but also optimizes the customer’s storage costs and system performance.

19. Can You Explain The Concept Of Snapshots And How They Can Be Used For Data Protection?

  • Include examples from your personal or professional experience to illustrate how you’ve successfully utilized snapshots in data protection strategies.
  • Highlight the benefits of using snapshots, such as their efficiency and ability to minimize downtime, while also touching on any limitations they may have.

Sample Answer: In my experience, snapshots are incredibly valuable for data protection. They provide a point-in-time copy of data, which I’ve used extensively to quickly recover systems after data corruption or loss incidents. For instance, at my previous job, we had a situation where a critical database was accidentally corrupted. Thanks to the snapshots we had in place, we were able to restore the database to its state right before the corruption occurred, with minimal downtime. Snapshots are efficient because they only record changes made to the data, which also helps in conserving storage space. However, it’s important to pair them with other data protection methods since they depend on the health of the primary data.
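A pointer-based snapshot can be illustrated with a small sketch: the snapshot freezes the current block map, and later writes land in new blocks, so only changed data consumes additional space. The dictionaries and block names below are hypothetical and only mimic the general idea, not ONTAP's WAFL internals.

```python
# Minimal sketch of pointer-based snapshots; illustrative only.
blocks = {"b1": "Jan data", "b2": "Feb data"}        # live data blocks
active = {"fileA": ["b1", "b2"]}                     # active file -> block map

# Taking a snapshot copies only the pointers, not the data itself.
snapshot = {name: list(ptrs) for name, ptrs in active.items()}

# Overwriting part of the file writes a new block; the snapshot still
# references the old one, preserving the point-in-time view.
blocks["b3"] = "Feb data (corrected)"
active["fileA"][1] = "b3"

print("Active view:  ", [blocks[p] for p in active["fileA"]])
print("Snapshot view:", [blocks[p] for p in snapshot["fileA"]])
```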

20. How Do You Ensure High Availability And Redundancy In A Storage Environment?

  • Highlight your understanding of different high availability (HA) and redundancy techniques specific to storage environments.
  • Share a real-life example where you successfully implemented such strategies to maintain system uptime and data integrity.

Sample Answer: In my experience, ensuring high availability and redundancy in a storage environment begins with a thorough assessment of the current infrastructure. I focus on deploying RAID configurations and mirroring data across multiple disks and locations to prevent data loss in case of hardware failure. For instance, at my last position, I implemented a RAID 6 configuration along with synchronous replication to a secondary site. This approach not only provided data protection against multiple disk failures but also ensured business continuity by enabling rapid failover to the replicated site without downtime. Tailoring the strategy to meet the specific needs and risk profile of each customer is key to delivering a resilient storage solution.
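As a quick illustration of the capacity trade-off in the RAID 6 example above, the sketch below computes usable capacity for a group that dedicates two disks' worth of space to parity. It deliberately ignores spares, right-sizing, and filesystem overhead that a real aggregate would account for.

```python
# Minimal sketch of RAID 6 usable capacity; simplified math, illustrative only.
def raid6_usable_tb(disks: int, disk_tb: float) -> float:
    if disks < 4:
        raise ValueError("RAID 6 needs at least 4 disks")
    return (disks - 2) * disk_tb      # two disks' worth of space goes to parity

print(raid6_usable_tb(12, 8))         # 12 x 8 TB disks -> 80.0 TB usable
print("The group survives any two concurrent disk failures")
```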

21. Can You Describe Your Experience With NetApp’s Cloud Volumes ONTAP Solution?

  • Focus on specific projects or tasks where you utilized Cloud Volumes ONTAP, highlighting your contributions and the outcomes.
  • Mention any challenges you faced while working with Cloud Volumes ONTAP and how you overcame them, showcasing your problem-solving skills.

Sample Answer: In my previous role, I had the opportunity to deploy NetApp’s Cloud Volumes ONTAP for a medium-sized enterprise aiming to optimize their cloud storage efficiency. My role involved planning the deployment, configuring the solution to meet the client’s specific storage needs, and ensuring a seamless migration of data from their legacy systems. I faced a challenge with data replication across different regions, which I resolved by implementing NetApp’s SnapMirror technology, significantly reducing the replication times. This experience not only enhanced my technical skills but also reinforced the importance of adaptability and innovation in cloud storage solutions.

22. How Do You Approach Capacity Optimization In A Storage Environment?

  • Focus on demonstrating your understanding of various techniques and tools used for capacity optimization, including data deduplication, compression, and storage tiering.
  • Highlight your ability to analyze data usage patterns and adjust resources accordingly to ensure efficient storage utilization and cost-effectiveness.

Sample Answer: In approaching capacity optimization in a storage environment, I start by conducting a comprehensive analysis of current data storage utilization and growth trends. This involves leveraging tools for monitoring and reporting on storage consumption. Based on the findings, I employ data deduplication and compression techniques to reduce the storage footprint. I actively use storage tiering to move less frequently accessed data to lower-cost storage options without compromising data accessibility. Regular audits and adjustments are crucial to adapting to changing data patterns and ensuring optimal storage efficiency. This strategy not only maximizes storage utilization but also significantly cuts down on costs.

23. Can You Explain The Concept Of Data Tiering And How It Can Improve Storage Performance?

  • Focus on explaining what data tiering is, including the process of moving data between different types of storage media based on its usage and value.
  • Highlight the benefits of data tiering, such as improved storage efficiency and performance, as well as cost savings.

Sample Answer: In my experience, data tiering is a strategic approach to managing storage resources efficiently. It involves categorizing data into different tiers based on its importance, frequency of access, and other criteria. By doing so, frequently accessed data is stored on high-performance storage systems like SSDs, while less critical data is moved to lower-cost, higher-capacity storage options such as HDDs or even cloud storage. This method not only optimizes storage performance by ensuring quick access to important data but also reduces storage costs by allocating resources more effectively. Implementing data tiering has allowed me to significantly improve storage management for various projects, enhancing both performance and cost-efficiency.
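A tiering policy of this kind can be sketched as a simple rule on access age: recently touched data stays on the performance tier, older data moves to capacity or cloud tiers. The thresholds and tier names below are arbitrary examples, not product defaults.

```python
# Minimal sketch of access-age-based tiering; thresholds are illustrative.
from datetime import datetime, timedelta

def pick_tier(last_access: datetime, now: datetime) -> str:
    age = now - last_access
    if age <= timedelta(days=7):
        return "ssd"        # hot data: performance tier
    if age <= timedelta(days=90):
        return "hdd"        # warm data: capacity tier
    return "cloud"          # cold data: object/cloud tier

now = datetime(2024, 6, 1)
print(pick_tier(datetime(2024, 5, 30), now))   # ssd
print(pick_tier(datetime(2024, 4, 1), now))    # hdd
print(pick_tier(datetime(2023, 11, 1), now))   # cloud
```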

24. How Do You Approach Data Backup And Recovery For A Customer?

  • Focus on understanding the customer’s specific needs and environment before recommending solutions.
  • Highlight the importance of regular testing and validation of backup and recovery processes.

Sample Answer: In approaching data backup and recovery, I first assess the customer’s current data landscape and recovery objectives. I consider the criticality of their data and applications to identify the right backup frequency and the recovery time objective (RTO) and recovery point objective (RPO) requirements. I recommend solutions that align with these needs, whether it’s on-premises, cloud-based, or a hybrid model, ensuring it offers scalability and reliability. Regular testing of the backup and recovery process is crucial for me, as it validates the effectiveness of the strategy and allows for timely adjustments to address any gaps.
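One way to show how RPO and RTO drive the design is a tiny sketch that turns an RPO target into a maximum backup interval and checks a measured restore time against the RTO. The function names and numbers are illustrative assumptions, not a prescribed methodology.

```python
# Minimal sketch relating RPO/RTO targets to a backup schedule; illustrative only.
def max_backup_interval_hours(rpo_hours: float) -> float:
    """Backups must run at least this often: worst-case loss equals the interval."""
    return rpo_hours

def meets_rto(measured_restore_hours: float, rto_hours: float) -> bool:
    """A recovery drill should confirm restores finish within the RTO."""
    return measured_restore_hours <= rto_hours

print(max_backup_interval_hours(4))                           # RPO 4h -> back up every 4h or more often
print(meets_rto(measured_restore_hours=1.5, rto_hours=2))     # True: drill met the RTO
```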

25. Can You Describe Your Experience With NetApp’s FabricPool Technology?

  • Focus on specific projects where you utilized FabricPool technology to address storage efficiency and cost-effectiveness.
  • Highlight your understanding of how FabricPool works, including its ability to tier cold data to lower-cost storage options without sacrificing accessibility.

Sample Answer: In my recent project, I leveraged NetApp’s FabricPool technology to optimize storage for a client dealing with vast amounts of cold data. By implementing FabricPool, I was able to automatically tier this less frequently accessed data to a more cost-effective storage solution, significantly reducing storage costs while maintaining quick access when needed. This experience enhanced my skills in managing storage resources efficiently, showcasing my ability to tailor solutions that directly address client needs for both performance and budget.

26. How Do You Approach Designing And Implementing A Multi-Tenant Storage Solution For A Customer?

  • Focus on the specific needs and challenges of implementing multi-tenant environments, emphasizing security, scalability, and performance.
  • Highlight your ability to leverage technologies and features that support isolation, resource allocation, and monitoring to ensure a successful deployment.

Sample Answer: In designing a multi-tenant storage solution, my first step is to thoroughly understand the client’s requirements and the data workload characteristics of each tenant. I prioritize security to ensure that tenants cannot access each other’s data, utilizing encryption and role-based access controls. Scalability is also key, so I design the architecture to easily scale out as the number of tenants or their data needs grow. Performance isolation is crucial; I use quality of service (QoS) features to guarantee that one tenant’s workload does not impact others’. I leverage NetApp’s capabilities, such as its multi-tenant features and storage efficiency technologies, to meet these needs effectively.

27. Can You Explain The Concept Of Quality Of Service (QoS) In A Storage Environment?

  • Provide specific examples of how QoS can be used to prioritize data access or limit bandwidth for less critical applications.
  • Mention the importance of QoS in maintaining performance consistency and meeting SLAs in multi-tenant environments or when dealing with mixed workloads.

Sample Answer: In my experience, Quality of Service (QoS) is pivotal for managing and optimizing the performance of storage resources. It allows administrators to prioritize access for critical applications, ensuring they receive the necessary bandwidth and IOPS. For instance, in a multi-tenant environment, I applied QoS policies to ensure high-priority applications had guaranteed performance levels, while limiting resources for less critical workloads. This approach not only maximized the efficiency of our storage systems but also helped in maintaining consistent performance levels, crucial for meeting SLAs and enhancing user satisfaction.
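To illustrate the ceiling side of QoS, here is a minimal sketch that clamps each workload's requested IOPS to a per-policy maximum. The policy names and limits are made up for the example; in ONTAP the equivalent controls are configured with qos policy-group and adaptive policy groups, which this sketch does not attempt to reproduce.

```python
# Minimal sketch of per-workload IOPS ceilings; names and limits are illustrative.
policies = {
    "prod-db":  20000,   # high ceiling for the critical workload
    "dev-test": 2000,    # much lower ceiling for non-critical work
}

def admit_iops(workload: str, requested_iops: int) -> int:
    """Clamp a workload's requested IOPS to its policy ceiling."""
    return min(requested_iops, policies[workload])

print(admit_iops("prod-db", 15000))   # 15000 -> within its ceiling
print(admit_iops("dev-test", 8000))   # 2000  -> throttled to the cap
```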

28. How Do You Approach Capacity Planning For A Cloud-Based Storage Solution?

  • Highlight the importance of understanding the customer’s current and future storage needs to ensure scalability and cost-efficiency.
  • Discuss the use of analytical tools and techniques for predicting growth and managing data lifecycle efficiently.

Sample Answer: In approaching capacity planning for a cloud-based storage solution, I start by gathering detailed information about the customer’s existing data volume and growth trends. This involves analyzing their business objectives, data usage patterns, and anticipated future needs. I leverage cloud analytics tools to forecast growth and identify peak usage times, ensuring the solution scales effectively. I also consider data redundancy and archival strategies to optimize costs without compromising on accessibility or security. By aligning the storage solution closely with the customer’s business trajectory, I ensure that they can scale their operations seamlessly while maintaining control over costs.

29. Can You Describe Your Experience With NetApp’s StorageGRID Solution?

  • Highlight specific projects or use cases where you implemented or managed StorageGRID, emphasizing the unique challenges you encountered and how you overcame them.
  • Mention any certifications or training you have completed related to NetApp technologies, specifically StorageGRID, to establish your expertise.

Sample Answer: In my last role, I had the opportunity to deploy NetApp’s StorageGRID for a large healthcare client. The client required a secure, scalable storage solution to manage their growing repository of patient records and imaging files. I led the project from the planning phase through to implementation, focusing on customizing StorageGRID to meet the client’s specific data retention and security needs. We leveraged StorageGRID’s object storage capabilities to improve data accessibility and reliability across multiple locations. I also facilitated staff training sessions to ensure the client’s team could effectively manage the system post-deployment. My certification in NetApp solutions was instrumental in navigating the complexities of this project.

30. How Do You Approach Designing And Implementing A Disaster Recovery Solution For A Customer?

  • Understand the specific needs and business requirements of the customer to tailor the disaster recovery plan effectively.
  • Emphasize the importance of regular testing and updates to the disaster recovery plan to ensure it remains effective over time.

Sample Answer: When designing a disaster recovery solution, I start by assessing the customer’s critical data and systems that need protection. I engage with stakeholders to understand their recovery time objectives (RTO) and recovery point objectives (RPO), ensuring the solution aligns with their business continuity goals. I then propose a solution that not only meets these requirements but is also scalable and cost-effective. This often involves leveraging cloud-based DR solutions for flexibility and efficiency. Regular testing of the DR plan is crucial, so I incorporate scheduled drills to guarantee the plan’s effectiveness and make necessary adjustments based on feedback and evolving business needs.

31. Can You Explain The Concept Of Data Protection As A Service (DPaaS)?

  • Demonstrate your understanding of DPaaS by highlighting its key features and benefits, such as cost-efficiency, scalability, and reliability.
  • Share specific examples from your experience where DPaaS solutions effectively supported business continuity and disaster recovery efforts.

Sample Answer: In my role, I’ve leveraged DPaaS to provide robust, scalable, and cost-effective data protection solutions for our clients. DPaaS streamlines the backup and recovery process, allowing businesses to focus on core operations while ensuring their data is secure and easily recoverable in the event of a disaster. For example, I implemented a DPaaS solution that significantly reduced the backup window for a client, while also providing them with real-time data replication. This ensured not only high availability of their critical data but also compliance with industry data protection standards.

32. How Do You Approach Capacity Planning For A Containerized Storage Environment?

  • Understand the specific requirements and workloads of the applications running in containers to estimate storage needs accurately.
  • Leverage tools and technologies for monitoring and analyzing storage utilization trends to anticipate future capacity needs.

Sample Answer: In approaching capacity planning for a containerized storage environment, I start by thoroughly understanding the unique storage requirements of the applications we’re containerizing. This involves analyzing current data usage patterns and growth rates. I make sure to factor in the scalability and flexibility needs specific to containerized applications. I use tools like Prometheus and Grafana for real-time monitoring and historical data analysis, which helps in predicting future storage needs. Additionally, I consider the benefits of dynamic storage provisioning in Kubernetes environments to efficiently manage storage resources and avoid over-provisioning. My focus is always on ensuring optimal performance and cost-efficiency while preparing for scale.

33. Can You Describe Your Experience With NetApp’s Trident Storage Orchestrator For Kubernetes?

  • Focus on specific projects or tasks where you utilized Trident to manage storage solutions in a Kubernetes environment. Highlight any challenges you overcame or efficiencies you achieved.
  • Mention your familiarity with Trident’s integration into Kubernetes, emphasizing how it simplifies persistent volume provisioning, and your ability to leverage Trident for dynamic storage management.

Sample Answer: In my previous role, I was tasked with enhancing our Kubernetes clusters’ storage capabilities. We chose NetApp’s Trident as our storage orchestrator. My first step was to integrate Trident into our existing Kubernetes setup, which initially presented a learning curve. However, once in place, Trident significantly streamlined our persistent volume provisioning process. I particularly appreciated how Trident allowed for dynamic storage allocation, which was a game-changer for our DevOps team, enabling us to automate and optimize storage provisioning based on our applications’ needs. I also spearheaded a project to leverage Trident’s capabilities for a high-availability setup, ensuring that our applications remained available and performant, even under high load. Throughout my experience, I found that my ability to adapt and leverage Trident’s full suite of features was key to improving our storage solutions within Kubernetes environments.

In preparing for a NetApp interview, diving into these top 33 questions and answers is a solid strategy to build your confidence and knowledge. They not only cover the fundamentals of NetApp technologies but also delve into more complex scenarios you may face in the role. Remember, beyond technical expertise, demonstrating how you approach problem-solving, adapt to new challenges, and communicate complex ideas simply can set you apart. With a thorough understanding of these questions and a mindset geared towards continuous learning and adaptability, you’ll be well-equipped to tackle your NetApp interview and embark on a rewarding career in the field.


  • ONTAP 9.6 commands
  • application provisioning config modify
  • application provisioning config show
  • autobalance aggregate show-aggregate-state
  • autobalance aggregate show-unbalanced-volume-state
  • autobalance aggregate config modify
  • autobalance aggregate config show
  • cluster add-node-status
  • cluster add-node
  • cluster create
  • cluster join
  • cluster modify
  • cluster ping-cluster
  • cluster remove-node
  • cluster setup
  • cluster show
  • cluster contact-info modify
  • cluster contact-info show
  • cluster date modify
  • cluster date show
  • cluster date zoneinfo load-from-uri
  • cluster date zoneinfo show
  • cluster ha modify
  • cluster ha show
  • cluster identity modify
  • cluster identity show
  • cluster image cancel-update
  • cluster image pause-update
  • cluster image resume-update
  • cluster image show-update-history
  • cluster image show-update-log-detail
  • cluster image show-update-log
  • cluster image show-update-progress
  • cluster image show
  • cluster image update
  • cluster image validate
  • cluster image package delete
  • cluster image package get
  • cluster image package show-repository
  • cluster kernel-service show
  • cluster kernel-service config modify
  • cluster kernel-service config show
  • cluster log-forwarding create
  • cluster log-forwarding delete
  • cluster log-forwarding modify
  • cluster log-forwarding show
  • cluster peer create
  • cluster peer delete
  • cluster peer modify-local-name
  • cluster peer modify
  • cluster peer ping
  • cluster peer show
  • cluster peer connection show
  • cluster peer health show
  • cluster peer offer cancel
  • cluster peer offer modify
  • cluster peer offer show
  • cluster peer policy modify
  • cluster peer policy show
  • cluster quorum-service options modify
  • cluster quorum-service options show
  • cluster ring show
  • cluster statistics show
  • cluster time-service ntp key create
  • cluster time-service ntp key delete
  • cluster time-service ntp key modify
  • cluster time-service ntp key show
  • cluster time-service ntp security modify
  • cluster time-service ntp security show
  • cluster time-service ntp server create
  • cluster time-service ntp server delete
  • cluster time-service ntp server modify
  • cluster time-service ntp server reset
  • cluster time-service ntp server show
  • cluster time-service ntp status show
  • event catalog show
  • event config force-sync
  • event config modify
  • event config set-proxy-password
  • event config show
  • event destination create
  • event destination delete
  • event destination modify
  • event destination show
  • event filter copy
  • event filter create
  • event filter delete
  • event filter rename
  • event filter show
  • event filter test
  • event filter rule add
  • event filter rule delete
  • event filter rule reorder
  • event log show
  • event mailhistory delete
  • event mailhistory show
  • event notification create
  • event notification delete
  • event notification modify
  • event notification show
  • event notification destination create
  • event notification destination delete
  • event notification destination modify
  • event notification destination show
  • event notification history show
  • event route add-destinations
  • event route modify
  • event route remove-destinations
  • event route show
  • event snmphistory delete
  • event snmphistory show
  • event status show
  • job show-bynode
  • job show-cluster
  • job show-completed
  • job unclaim
  • job watch-progress
  • job history show
  • job initstate show
  • job private delete
  • job private pause
  • job private resume
  • job private show-completed
  • job private show
  • job private stop
  • job private watch-progress
  • job schedule delete
  • job schedule show-jobs
  • job schedule show
  • job schedule cron create
  • job schedule cron delete
  • job schedule cron modify
  • job schedule cron show
  • job schedule interval create
  • job schedule interval delete
  • job schedule interval modify
  • job schedule interval show
  • lun maxsize
  • lun move-in-volume
  • lun bind create
  • lun bind destroy
  • lun bind show
  • lun copy cancel
  • lun copy modify
  • lun copy pause
  • lun copy resume
  • lun copy show
  • lun copy start
  • lun igroup add
  • lun igroup bind
  • lun igroup create
  • lun igroup delete
  • lun igroup disable-aix-support
  • lun igroup modify
  • lun igroup remove
  • lun igroup rename
  • lun igroup show
  • lun igroup unbind
  • lun import create
  • lun import delete
  • lun import pause
  • lun import prepare-to-downgrade
  • lun import resume
  • lun import show
  • lun import start
  • lun import stop
  • lun import throttle
  • lun import verify start
  • lun import verify stop
  • lun mapping add-reporting-nodes
  • lun mapping create
  • lun mapping delete
  • lun mapping remove-reporting-nodes
  • lun mapping show-initiator
  • lun mapping show
  • lun move cancel
  • lun move modify
  • lun move pause
  • lun move resume
  • lun move show
  • lun move start
  • lun persistent-reservation clear
  • lun persistent-reservation show
  • lun portset add
  • lun portset create
  • lun portset delete
  • lun portset remove
  • lun portset show
  • lun transition show
  • lun transition start
  • lun transition 7-mode delete
  • lun transition 7-mode show
  • metrocluster configure
  • metrocluster heal
  • metrocluster modify
  • metrocluster show
  • metrocluster switchback
  • metrocluster switchover
  • metrocluster check disable-periodic-check
  • metrocluster check enable-periodic-check
  • metrocluster check run
  • metrocluster check show
  • metrocluster check aggregate show
  • metrocluster check cluster show
  • metrocluster check config-replication show-aggregate-eligibility
  • metrocluster check config-replication show-capture-status
  • metrocluster check config-replication show
  • metrocluster check connection show
  • metrocluster check lif repair-placement
  • metrocluster check lif show
  • metrocluster check node show
  • metrocluster check volume show
  • metrocluster config-replication cluster-storage-configuration modify
  • metrocluster config-replication cluster-storage-configuration show
  • metrocluster config-replication resync-status show
  • metrocluster configuration-settings show-status
  • metrocluster configuration-settings connection check
  • metrocluster configuration-settings connection connect
  • metrocluster configuration-settings connection disconnect
  • metrocluster configuration-settings connection show
  • metrocluster configuration-settings dr-group create
  • metrocluster configuration-settings dr-group delete
  • metrocluster configuration-settings dr-group show
  • metrocluster configuration-settings interface create
  • metrocluster configuration-settings interface delete
  • metrocluster configuration-settings interface show
  • metrocluster interconnect adapter modify
  • metrocluster interconnect adapter show
  • metrocluster interconnect mirror show
  • metrocluster interconnect mirror multipath show
  • metrocluster node show
  • metrocluster operation show
  • metrocluster operation history show
  • metrocluster vserver recover-from-partial-switchback
  • metrocluster vserver recover-from-partial-switchover
  • metrocluster vserver resync
  • metrocluster vserver show
  • network ping
  • network ping6
  • network test-path
  • network traceroute
  • network traceroute6
  • network arp create
  • network arp delete
  • network arp show
  • network arp active-entry delete
  • network arp active-entry show
  • network bgp config create
  • network bgp config delete
  • network bgp config modify
  • network bgp config show
  • network bgp defaults modify
  • network bgp defaults show
  • network bgp peer-group create
  • network bgp peer-group delete
  • network bgp peer-group modify
  • network bgp peer-group rename
  • network bgp peer-group show
  • network bgp vserver-status show
  • network cloud routing-table create
  • network cloud routing-table delete
  • network connections active show-clients
  • network connections active show-lifs
  • network connections active show-protocols
  • network connections active show-services
  • network connections active show
  • network connections listening show
  • network device-discovery show
  • network fcp adapter modify
  • network fcp adapter show
  • network fcp topology show
  • network fcp zone show
  • network interface create
  • network interface delete
  • network interface migrate-all
  • network interface migrate
  • network interface modify
  • network interface rename
  • network interface revert
  • network interface show
  • network interface start-cluster-check
  • network interface capacity show
  • network interface capacity details show
  • network interface check failover show
  • network interface dns-lb-stats show
  • network interface failover-groups add-targets
  • network interface failover-groups create
  • network interface failover-groups delete
  • network interface failover-groups modify
  • network interface failover-groups remove-targets
  • network interface failover-groups rename
  • network interface failover-groups show
  • network interface lif-weights show
  • network interface service show
  • network interface service-policy add-service
  • network interface service-policy clone
  • network interface service-policy create
  • network interface service-policy delete
  • network interface service-policy modify-service
  • network interface service-policy remove-service
  • network interface service-policy rename
  • network interface service-policy restore-defaults
  • network interface service-policy show
  • network ipspace create
  • network ipspace delete
  • network ipspace rename
  • network ipspace show
  • network ndp default-router delete-all
  • network ndp default-router show
  • network ndp neighbor create
  • network ndp neighbor delete
  • network ndp neighbor show
  • network ndp neighbor active-entry delete
  • network ndp neighbor active-entry show
  • network ndp prefix delete-all
  • network ndp prefix show
  • network options cluster-health-notifications modify
  • network options cluster-health-notifications show
  • network options detect-switchless-cluster modify
  • network options detect-switchless-cluster show
  • network options ipv6 modify
  • network options ipv6 show
  • network options load-balancing modify
  • network options load-balancing show
  • network options multipath-routing modify
  • network options multipath-routing show
  • network options port-health-monitor disable-monitors
  • network options port-health-monitor enable-monitors
  • network options port-health-monitor modify
  • network options port-health-monitor show
  • network options send-soa modify
  • network options send-soa show
  • network options switchless-cluster modify
  • network options switchless-cluster show
  • network port delete
  • network port modify
  • network port show-address-filter-info
  • network port show
  • network port broadcast-domain add-ports
  • network port broadcast-domain create
  • network port broadcast-domain delete
  • network port broadcast-domain merge
  • network port broadcast-domain modify
  • network port broadcast-domain remove-ports
  • network port broadcast-domain rename
  • network port broadcast-domain show
  • network port broadcast-domain split
  • network port ifgrp add-port
  • network port ifgrp create
  • network port ifgrp delete
  • network port ifgrp remove-port
  • network port ifgrp show
  • network port vip create
  • network port vip delete
  • network port vip show
  • network port vlan create
  • network port vlan delete
  • network port vlan show
  • network qos-marking modify
  • network qos-marking show
  • network route create
  • network route delete
  • network route show-lifs
  • network route show
  • network route active-entry show
  • network subnet add-ranges
  • network subnet create
  • network subnet delete
  • network subnet modify
  • network subnet remove-ranges
  • network subnet rename
  • network subnet show
  • network tcpdump show
  • network tcpdump start
  • network tcpdump stop
  • network tcpdump trace delete
  • network tcpdump trace show
  • network test-link run-test
  • network test-link show
  • network test-link start-server
  • network test-link stop-server
  • network tuning icmp modify
  • network tuning icmp show
  • network tuning icmp6 modify
  • network tuning icmp6 show
  • network tuning tcp modify
  • network tuning tcp show
  • protection-type show
  • qos adaptive-policy-group create
  • qos adaptive-policy-group delete
  • qos adaptive-policy-group modify
  • qos adaptive-policy-group rename
  • qos adaptive-policy-group show
  • qos policy-group create
  • qos policy-group delete
  • qos policy-group modify
  • qos policy-group rename
  • qos policy-group show
  • qos settings cache modify
  • qos settings cache show
  • qos statistics characteristics show
  • qos statistics latency show
  • qos statistics performance show
  • qos statistics resource cpu show
  • qos statistics resource disk show
  • qos statistics volume characteristics show
  • qos statistics volume latency show
  • qos statistics volume performance show
  • qos statistics volume resource cpu show
  • qos statistics volume resource disk show
  • qos statistics workload characteristics show
  • qos statistics workload latency show
  • qos statistics workload performance show
  • qos statistics workload resource cpu show
  • qos statistics workload resource disk show
  • qos workload show
  • security snmpusers
  • security audit modify
  • security audit show
  • security audit log show
  • security certificate create
  • security certificate delete
  • security certificate generate-csr
  • security certificate install
  • security certificate rename
  • security certificate show-generated
  • security certificate show-truststore
  • security certificate show-user-installed
  • security certificate show
  • security certificate sign
  • security certificate ca-issued revoke
  • security certificate ca-issued show
  • security certificate truststore show
  • security config modify
  • security config show
  • security config ocsp disable
  • security config ocsp enable
  • security config ocsp show
  • security config status show
  • security key-manager add
  • security key-manager create-key
  • security key-manager delete-key-database
  • security key-manager delete-kmip-config
  • security key-manager delete
  • security key-manager prepare-to-downgrade
  • security key-manager query
  • security key-manager restore
  • security key-manager setup
  • security key-manager show-key-store
  • security key-manager show
  • security key-manager update-passphrase
  • security key-manager backup show
  • security key-manager config modify
  • security key-manager config show
  • security key-manager external add-servers
  • security key-manager external disable
  • security key-manager external enable
  • security key-manager external modify-server
  • security key-manager external modify
  • security key-manager external remove-servers
  • security key-manager external restore
  • security key-manager external show-status
  • security key-manager external show
  • security key-manager external boot-interfaces modify
  • security key-manager external boot-interfaces show
  • security key-manager key create
  • security key-manager key delete
  • security key-manager key migrate
  • security key-manager key query
  • security key-manager key show
  • security key-manager onboard disable
  • security key-manager onboard enable
  • security key-manager onboard show-backup
  • security key-manager onboard sync
  • security key-manager onboard update-passphrase
  • security login create
  • security login delete
  • security login expire-password
  • security login lock
  • security login modify
  • security login password-prepare-to-downgrade
  • security login password
  • security login show
  • security login unlock
  • security login whoami
  • security login banner modify
  • security login banner show
  • security login domain-tunnel create
  • security login domain-tunnel delete
  • security login domain-tunnel modify
  • security login domain-tunnel show
  • security login motd modify
  • security login motd show
  • security login publickey create
  • security login publickey delete
  • security login publickey load-from-uri
  • security login publickey modify
  • security login publickey show
  • security login rest-role create
  • security login rest-role delete
  • security login rest-role modify
  • security login rest-role show
  • security login role create
  • security login role delete
  • security login role modify
  • security login role prepare-to-downgrade
  • security login role show-ontapi
  • security login role show
  • security login role config modify
  • security login role config reset
  • security login role config show
  • security protocol modify
  • security protocol show
  • security protocol ssh modify
  • security protocol ssh show
  • security saml-sp create
  • security saml-sp delete
  • security saml-sp modify
  • security saml-sp repair
  • security saml-sp show
  • security saml-sp status show
  • security session kill-cli
  • security session show
  • security session limit create
  • security session limit delete
  • security session limit modify
  • security session limit show
  • security session limit application create
  • security session limit application delete
  • security session limit application modify
  • security session limit application show
  • security session limit location create
  • security session limit location delete
  • security session limit location modify
  • security session limit location show
  • security session limit request create
  • security session limit request delete
  • security session limit request modify
  • security session limit request show
  • security session limit user create
  • security session limit user delete
  • security session limit user modify
  • security session limit user show
  • security session limit vserver create
  • security session limit vserver delete
  • security session limit vserver modify
  • security session limit vserver show
  • security session request-statistics show-by-application
  • security session request-statistics show-by-location
  • security session request-statistics show-by-request
  • security session request-statistics show-by-user
  • security session request-statistics show-by-vserver
  • security ssh add
  • security ssh modify
  • security ssh prepare-to-downgrade
  • security ssh remove
  • security ssh show
  • security ssl modify
  • security ssl show
  • snaplock compliance-clock initialize
  • snaplock compliance-clock show
  • snaplock compliance-clock ntp modify
  • snaplock compliance-clock ntp show
  • snaplock event-retention abort
  • snaplock event-retention apply
  • snaplock event-retention show-vservers
  • snaplock event-retention show
  • snaplock event-retention policy create
  • snaplock event-retention policy delete
  • snaplock event-retention policy modify
  • snaplock event-retention policy show
  • snaplock legal-hold abort
  • snaplock legal-hold begin
  • snaplock legal-hold dump-files
  • snaplock legal-hold dump-litigations
  • snaplock legal-hold end
  • snaplock legal-hold show
  • snaplock log create
  • snaplock log delete
  • snaplock log modify
  • snaplock log show
  • snaplock log file archive
  • snaplock log file show
  • snapmirror abort
  • snapmirror break
  • snapmirror create
  • snapmirror delete
  • snapmirror initialize-ls-set
  • snapmirror initialize
  • snapmirror list-destinations
  • snapmirror modify
  • snapmirror promote
  • snapmirror protect
  • snapmirror quiesce
  • snapmirror release
  • snapmirror restore
  • snapmirror resume
  • snapmirror resync
  • snapmirror set-options
  • snapmirror show-history
  • snapmirror show
  • snapmirror update-ls-set
  • snapmirror update
  • snapmirror config-replication cluster-storage-configuration modify
  • snapmirror config-replication cluster-storage-configuration show
  • snapmirror config-replication status show-aggregate-eligibility
  • snapmirror config-replication status show-communication
  • snapmirror config-replication status show
  • snapmirror object-store profiler abort
  • snapmirror object-store profiler show
  • snapmirror object-store profiler start
  • snapmirror policy add-rule
  • snapmirror policy create
  • snapmirror policy delete
  • snapmirror policy modify-rule
  • snapmirror policy modify
  • snapmirror policy remove-rule
  • snapmirror policy show
  • snapmirror snapshot-owner create
  • snapmirror snapshot-owner delete
  • snapmirror snapshot-owner show
  • statistics show-periodic
  • statistics show
  • statistics start
  • statistics stop
  • statistics aggregate show
  • statistics cache flash-pool show
  • statistics catalog counter show
  • statistics catalog instance show
  • statistics catalog object show
  • statistics disk show
  • statistics lif show
  • statistics lun show
  • statistics namespace show
  • statistics nfs show-mount
  • statistics nfs show-nlm
  • statistics nfs show-statusmon
  • statistics nfs show-v3
  • statistics nfs show-v4
  • statistics node show
  • statistics oncrpc show-rpc-calls
  • statistics port fcp show
  • statistics preset delete
  • statistics preset modify
  • statistics preset show
  • statistics preset detail show
  • statistics qtree show
  • statistics samples delete
  • statistics samples show
  • statistics settings modify
  • statistics settings show
  • statistics system show
  • statistics top client show
  • statistics top file show
  • statistics volume show
  • statistics vserver show
  • statistics workload show
  • statistics-v1 nfs show-mount
  • statistics-v1 nfs show-nlm
  • statistics-v1 nfs show-statusmon
  • statistics-v1 nfs show-v3
  • statistics-v1 nfs show-v4
  • statistics-v1 protocol-request-size show
  • storage-service show
  • storage aggregate add-disks
  • storage aggregate auto-provision
  • storage aggregate create
  • storage aggregate delete
  • storage aggregate mirror
  • storage aggregate modify
  • storage aggregate offline
  • storage aggregate online
  • storage aggregate remove-stale-record
  • storage aggregate rename
  • storage aggregate restrict
  • storage aggregate scrub
  • storage aggregate show-auto-provision-progress
  • storage aggregate show-cumulated-efficiency
  • storage aggregate show-efficiency
  • storage aggregate show-resync-status
  • storage aggregate show-scrub-status
  • storage aggregate show-space
  • storage aggregate show-spare-disks
  • storage aggregate show-status
  • storage aggregate show
  • storage aggregate verify
  • storage aggregate efficiency show
  • storage aggregate efficiency cross-volume-dedupe revert-to
  • storage aggregate efficiency cross-volume-dedupe show
  • storage aggregate efficiency cross-volume-dedupe start
  • storage aggregate efficiency cross-volume-dedupe stop
  • storage aggregate encryption show-key-id
  • storage aggregate inode-upgrade resume
  • storage aggregate inode-upgrade show
  • storage aggregate object-store attach
  • storage aggregate object-store modify
  • storage aggregate object-store show-freeing-status
  • storage aggregate object-store show-space
  • storage aggregate object-store show
  • storage aggregate object-store config create
  • storage aggregate object-store config delete
  • storage aggregate object-store config modify
  • storage aggregate object-store config rename
  • storage aggregate object-store config show
  • storage aggregate object-store profiler abort
  • storage aggregate object-store profiler show
  • storage aggregate object-store profiler start
  • storage aggregate plex delete
  • storage aggregate plex offline
  • storage aggregate plex online
  • storage aggregate plex show
  • storage aggregate reallocation quiesce
  • storage aggregate reallocation restart
  • storage aggregate reallocation schedule
  • storage aggregate reallocation show
  • storage aggregate reallocation start
  • storage aggregate reallocation stop
  • storage aggregate relocation show
  • storage aggregate relocation start
  • storage aggregate resynchronization modify
  • storage aggregate resynchronization show
  • storage aggregate resynchronization options modify
  • storage aggregate resynchronization options show
  • storage array modify
  • storage array remove
  • storage array rename
  • storage array show
  • storage array config show
  • storage array disk paths show
  • storage array port modify
  • storage array port remove
  • storage array port show
  • storage automated-working-set-analyzer show
  • storage automated-working-set-analyzer start
  • storage automated-working-set-analyzer stop
  • storage automated-working-set-analyzer volume show
  • storage bridge add
  • storage bridge modify
  • storage bridge refresh
  • storage bridge remove
  • storage bridge run-cli
  • storage bridge show
  • storage bridge config-dump collect
  • storage bridge config-dump delete
  • storage bridge config-dump show
  • storage bridge coredump collect
  • storage bridge coredump delete
  • storage bridge coredump show
  • storage bridge firmware update
  • storage disk assign
  • storage disk fail
  • storage disk reassign
  • storage disk refresh-ownership
  • storage disk remove-reservation
  • storage disk remove
  • storage disk removeowner
  • storage disk replace
  • storage disk set-foreign-lun
  • storage disk set-led
  • storage disk show
  • storage disk unfail
  • storage disk updatefirmware
  • storage disk zerospares
  • storage disk error show
  • storage disk firmware revert
  • storage disk firmware show-update-status
  • storage disk firmware update
  • storage disk option modify
  • storage disk option show
  • storage encryption disk destroy
  • storage encryption disk modify
  • storage encryption disk revert-to-original-state
  • storage encryption disk sanitize
  • storage encryption disk show-status
  • storage encryption disk show
  • storage errors show
  • storage failover giveback
  • storage failover modify
  • storage failover show-giveback
  • storage failover show-takeover
  • storage failover show
  • storage failover takeover
  • storage failover hwassist show
  • storage failover hwassist test
  • storage failover hwassist stats clear
  • storage failover hwassist stats show
  • storage failover internal-options show
  • storage failover mailbox-disk show
  • storage failover progress-table show
  • storage firmware download
  • storage firmware acp delete
  • storage firmware acp rename
  • storage firmware acp show
  • storage firmware disk delete
  • storage firmware disk rename
  • storage firmware disk show
  • storage firmware shelf delete
  • storage firmware shelf rename
  • storage firmware shelf show
  • storage iscsi-initiator add-target
  • storage iscsi-initiator connect
  • storage iscsi-initiator disconnect
  • storage iscsi-initiator remove-target
  • storage iscsi-initiator show
  • storage load balance
  • storage load show
  • storage path quiesce
  • storage path resume
  • storage path show-by-initiator
  • storage path show
  • storage pool add
  • storage pool create
  • storage pool delete
  • storage pool reassign
  • storage pool rename
  • storage pool show-aggregate
  • storage pool show-available-capacity
  • storage pool show-disks
  • storage pool show
  • storage port disable
  • storage port enable
  • storage port rescan
  • storage port reset-device
  • storage port reset
  • storage port show
  • storage raid-options modify
  • storage raid-options show
  • storage shelf show
  • storage shelf acp configure
  • storage shelf acp show
  • storage shelf acp module show
  • storage shelf drawer show-phy
  • storage shelf drawer show-slot
  • storage shelf drawer show
  • storage shelf firmware show-update-status
  • storage shelf firmware update
  • storage shelf location-led modify
  • storage shelf location-led show
  • storage shelf port show
  • storage switch add
  • storage switch modify
  • storage switch refresh
  • storage switch remove
  • storage switch show
  • storage tape offline
  • storage tape online
  • storage tape position
  • storage tape reset
  • storage tape show-errors
  • storage tape show-media-changer
  • storage tape show-supported-status
  • storage tape show-tape-drive
  • storage tape show
  • storage tape trace
  • storage tape alias clear
  • storage tape alias set
  • storage tape alias show
  • storage tape config-file delete
  • storage tape config-file get
  • storage tape config-file show
  • storage tape library config show
  • storage tape library path show-by-initiator
  • storage tape library path show
  • storage tape load-balance modify
  • storage tape load-balance show
  • system chassis show
  • system chassis fru show
  • system cluster-switch configure-health-monitor
  • system cluster-switch create
  • system cluster-switch delete
  • system cluster-switch modify
  • system cluster-switch prepare-to-downgrade
  • system cluster-switch show-all
  • system cluster-switch show
  • system cluster-switch log collect
  • system cluster-switch log disable-collection
  • system cluster-switch log enable-collection
  • system cluster-switch log modify
  • system cluster-switch log setup-password
  • system cluster-switch log show
  • system cluster-switch polling-interval modify
  • system cluster-switch polling-interval show
  • system cluster-switch threshold show
  • system configuration backup copy
  • system configuration backup create
  • system configuration backup delete
  • system configuration backup download
  • system configuration backup rename
  • system configuration backup show
  • system configuration backup upload
  • system configuration backup settings modify
  • system configuration backup settings set-password
  • system configuration backup settings show
  • system configuration recovery cluster modify
  • system configuration recovery cluster recreate
  • system configuration recovery cluster rejoin
  • system configuration recovery cluster show
  • system configuration recovery cluster sync
  • system configuration recovery node restore
  • system controller show
  • system controller bootmedia show-serial-number
  • system controller bootmedia show
  • system controller clus-flap-threshold show
  • system controller config show-errors
  • system controller config show
  • system controller config pci show-add-on-devices
  • system controller config pci show-hierarchy
  • system controller environment show
  • system controller flash-cache show
  • system controller flash-cache secure-erase run
  • system controller flash-cache secure-erase show
  • system controller fru show-manufacturing-info
  • system controller fru show
  • system controller fru led disable-all
  • system controller fru led enable-all
  • system controller fru led modify
  • system controller fru led show
  • system controller ioxm show
  • system controller location-led modify
  • system controller location-led show
  • system controller memory dimm show
  • system controller nvram-bb-threshold show
  • system controller pci show
  • system controller pcicerr threshold modify
  • system controller pcicerr threshold show
  • system controller platform-capability show
  • system controller replace cancel
  • system controller replace pause
  • system controller replace resume
  • system controller replace show-details
  • system controller replace show
  • system controller replace start
  • system controller service-event delete
  • system controller service-event show
  • system controller slot module insert
  • system controller slot module remove
  • system controller slot module replace
  • system controller slot module show
  • system controller sp config show
  • system controller sp upgrade show
  • system feature-usage show-history
  • system feature-usage show-summary
  • system fru-check show
  • system ha interconnect config show
  • system ha interconnect link off
  • system ha interconnect link on
  • system ha interconnect ood clear-error-statistics
  • system ha interconnect ood clear-performance-statistics
  • system ha interconnect ood disable-optimization
  • system ha interconnect ood disable-statistics
  • system ha interconnect ood enable-optimization
  • system ha interconnect ood enable-statistics
  • system ha interconnect ood send-diagnostic-buffer
  • system ha interconnect ood status show
  • system ha interconnect port show
  • system ha interconnect statistics clear-port-symbol-error
  • system ha interconnect statistics clear-port
  • system ha interconnect statistics show-scatter-gather-list
  • system ha interconnect statistics performance show
  • system ha interconnect status show
  • system health alert delete
  • system health alert modify
  • system health alert show
  • system health alert definition show
  • system health autosupport trigger history show
  • system health config show
  • system health policy definition modify
  • system health policy definition show
  • system health status show
  • system health subsystem show
  • system license add
  • system license clean-up
  • system license delete
  • system license show-aggregates
  • system license show-status
  • system license show
  • system license update-leases
  • system license capacity show
  • system license entitlement-risk show
  • system license license-manager check
  • system license license-manager modify
  • system license license-manager show
  • system license status show
  • system node halt
  • system node migrate-root
  • system node modify
  • system node reboot
  • system node rename
  • system node restore-backup
  • system node revert-to
  • system node run-console
  • system node run
  • system node show-discovered
  • system node show
  • system node autosupport invoke-core-upload
  • system node autosupport invoke-performance-archive
  • system node autosupport invoke-splog
  • system node autosupport invoke
  • system node autosupport modify
  • system node autosupport show
  • system node autosupport check show-details
  • system node autosupport check show
  • system node autosupport destinations show
  • system node autosupport history cancel
  • system node autosupport history retransmit
  • system node autosupport history show-upload-details
  • system node autosupport history show
  • system node autosupport manifest show
  • system node autosupport trigger modify
  • system node autosupport trigger show
  • system node coredump delete-all
  • system node coredump delete
  • system node coredump save-all
  • system node coredump save
  • system node coredump show
  • system node coredump status
  • system node coredump trigger
  • system node coredump upload
  • system node coredump config modify
  • system node coredump config show
  • system node coredump external-device save
  • system node coredump external-device show
  • system node coredump reports delete
  • system node coredump reports show
  • system node coredump reports upload
  • system node coredump segment delete-all
  • system node coredump segment delete
  • system node coredump segment show
  • system node environment sensors show
  • system node external-cache modify
  • system node external-cache show
  • system node firmware download
  • system node hardware nvram-encryption modify
  • system node hardware nvram-encryption show
  • system node hardware tape drive show
  • system node hardware tape library show
  • system node hardware unified-connect modify
  • system node hardware unified-connect show
  • system node image abort-operation
  • system node image get
  • system node image modify
  • system node image show-update-progress
  • system node image show
  • system node image update
  • system node image package delete
  • system node image package show
  • system node image package external-device delete
  • system node image package external-device show
  • system node internal-switch show
  • system node internal-switch dump stat
  • system node nfs usage show
  • system node power on
  • system node power show
  • system node root-mount create
  • system node root-mount delete
  • system node root-mount show
  • system node upgrade-revert show
  • system node upgrade-revert upgrade
  • system node virtual-machine show-network-load-balancer
  • system node virtual-machine disk-object-store create
  • system node virtual-machine disk-object-store delete
  • system node virtual-machine disk-object-store modify
  • system node virtual-machine disk-object-store show
  • system node virtual-machine hypervisor modify-credentials
  • system node virtual-machine hypervisor show-credentials
  • system node virtual-machine hypervisor show
  • system node virtual-machine instance show-system-disks
  • system node virtual-machine instance show
  • system script delete
  • system script show
  • system script start
  • system script stop
  • system script upload
  • system service-processor reboot-sp
  • system service-processor show
  • system service-processor api-service disable-installed-certificates
  • system service-processor api-service enable-installed-certificates
  • system service-processor api-service modify
  • system service-processor api-service renew-internal-certificates
  • system service-processor api-service show
  • system service-processor image modify
  • system service-processor image show
  • system service-processor image update
  • system service-processor image update-progress show
  • system service-processor log show-allocations
  • system service-processor network modify
  • system service-processor network show
  • system service-processor network auto-configuration disable
  • system service-processor network auto-configuration enable
  • system service-processor network auto-configuration show
  • system service-processor ssh add-allowed-addresses
  • system service-processor ssh remove-allowed-addresses
  • system service-processor ssh show
  • system services firewall modify
  • system services firewall show
  • system services firewall policy clone
  • system services firewall policy create
  • system services firewall policy delete
  • system services firewall policy modify
  • system services firewall policy show
  • system services manager install show
  • system services manager policy add
  • system services manager policy remove
  • system services manager policy setstate
  • system services manager policy show
  • system services manager status show
  • system services ndmp kill-all
  • system services ndmp kill
  • system services ndmp modify
  • system services ndmp off
  • system services ndmp on
  • system services ndmp password
  • system services ndmp probe
  • system services ndmp show
  • system services ndmp status
  • system services ndmp log start
  • system services ndmp log stop
  • system services ndmp node-scope-mode off
  • system services ndmp node-scope-mode on
  • system services ndmp node-scope-mode status
  • system services ndmp service modify
  • system services ndmp service show
  • system services ndmp service start
  • system services ndmp service stop
  • system services ndmp service terminate
  • system services web modify
  • system services web show
  • system services web node show
  • system smtape abort
  • system smtape backup
  • system smtape break
  • system smtape continue
  • system smtape restore
  • system smtape showheader
  • system smtape status clear
  • system smtape status show
  • system snmp authtrap
  • system snmp contact
  • system snmp enable-snmpv3
  • system snmp init
  • system snmp location
  • system snmp prepare-to-downgrade
  • system snmp show
  • system snmp community add
  • system snmp community delete
  • system snmp community show
  • system snmp traphost add
  • system snmp traphost delete
  • system snmp traphost show
  • system status show
  • system timeout modify
  • system timeout show
  • template copy
  • template delete
  • template download
  • template provision
  • template rename
  • template show-permissions
  • template show
  • template upload
  • template parameter modify
  • template parameter show
  • volume autosize
  • volume create
  • volume delete
  • volume expand
  • volume make-vsroot
  • volume modify
  • volume mount
  • volume offline
  • volume online
  • volume prepare-for-revert
  • volume rehost
  • volume rename
  • volume restrict
  • volume show-footprint
  • volume show-space
  • volume show
  • volume size
  • volume transition-prepare-to-downgrade
  • volume unmount
  • volume clone create
  • volume clone show
  • volume clone sharing-by-split show
  • volume clone sharing-by-split undo show
  • volume clone sharing-by-split undo start-all
  • volume clone sharing-by-split undo start
  • volume clone sharing-by-split undo stop
  • volume clone split estimate
  • volume clone split show
  • volume clone split start
  • volume clone split stop
  • volume efficiency check
  • volume efficiency modify
  • volume efficiency off
  • volume efficiency on
  • volume efficiency prepare-to-downgrade
  • volume efficiency promote
  • volume efficiency revert-to
  • volume efficiency show
  • volume efficiency start
  • volume efficiency stat
  • volume efficiency stop
  • volume efficiency undo
  • volume efficiency policy create
  • volume efficiency policy delete
  • volume efficiency policy modify
  • volume efficiency policy show
  • volume encryption conversion pause
  • volume encryption conversion resume
  • volume encryption conversion show
  • volume encryption conversion start
  • volume encryption rekey pause
  • volume encryption rekey resume
  • volume encryption rekey show
  • volume encryption rekey start
  • volume encryption secure-purge abort
  • volume encryption secure-purge show
  • volume encryption secure-purge start
  • volume file compact-data
  • volume file modify
  • volume file privileged-delete
  • volume file reservation
  • volume file show-disk-usage
  • volume file show-filehandle
  • volume file show-inode
  • volume file clone autodelete
  • volume file clone create
  • volume file clone show-autodelete
  • volume file clone deletion add-extension
  • volume file clone deletion modify
  • volume file clone deletion remove-extension
  • volume file clone deletion show
  • volume file clone split load modify
  • volume file clone split load show
  • volume file fingerprint abort
  • volume file fingerprint dump
  • volume file fingerprint show
  • volume file fingerprint start
  • volume file retention show
  • volume flexcache config-refresh
  • volume flexcache create
  • volume flexcache delete
  • volume flexcache show
  • volume flexcache sync-properties
  • volume flexcache connection-status show
  • volume flexcache origin show-caches
  • volume flexgroup qtree-disable
  • volume inode-upgrade prepare-to-downgrade
  • volume inode-upgrade resume
  • volume inode-upgrade show
  • volume move abort
  • volume move modify
  • volume move show
  • volume move start
  • volume move trigger-cutover
  • volume move recommend show
  • volume move target-aggr show
  • volume qtree create
  • volume qtree delete
  • volume qtree modify
  • volume qtree oplocks
  • volume qtree rename
  • volume qtree security
  • volume qtree show
  • volume qtree statistics-reset
  • volume qtree statistics
  • volume quota modify
  • volume quota off
  • volume quota on
  • volume quota report
  • volume quota resize
  • volume quota show
  • volume quota policy copy
  • volume quota policy create
  • volume quota policy delete
  • volume quota policy rename
  • volume quota policy show
  • volume quota policy rule create
  • volume quota policy rule delete
  • volume quota policy rule modify
  • volume quota policy rule show
  • volume quota policy rule count show
  • volume reallocation measure
  • volume reallocation off
  • volume reallocation on
  • volume reallocation quiesce
  • volume reallocation restart
  • volume reallocation schedule
  • volume reallocation show
  • volume reallocation start
  • volume reallocation stop
  • volume schedule-style prepare-to-downgrade
  • volume snaplock modify
  • volume snaplock prepare-to-downgrade
  • volume snaplock show
  • volume snapshot compute-reclaimable
  • volume snapshot create
  • volume snapshot delete
  • volume snapshot modify-snaplock-expiry-time
  • volume snapshot modify
  • volume snapshot partial-restore-file
  • volume snapshot prepare-for-revert
  • volume snapshot rename
  • volume snapshot restore-file
  • volume snapshot restore
  • volume snapshot show-delta
  • volume snapshot show
  • volume snapshot autodelete modify
  • volume snapshot autodelete show
  • volume snapshot policy add-schedule
  • volume snapshot policy create
  • volume snapshot policy delete
  • volume snapshot policy modify-schedule
  • volume snapshot policy modify
  • volume snapshot policy remove-schedule
  • volume snapshot policy show
  • volume transition-convert-dir show
  • volume transition-convert-dir start
  • vserver add-aggregates
  • vserver add-protocols
  • vserver context
  • vserver create
  • vserver delete
  • vserver modify
  • vserver prepare-for-revert
  • vserver remove-aggregates
  • vserver remove-protocols
  • vserver rename
  • vserver restamp-msid
  • vserver show-aggregates
  • vserver show-protocols
  • vserver show
  • vserver start
  • vserver stop
  • vserver unlock
  • vserver active-directory create
  • vserver active-directory delete
  • vserver active-directory modify
  • vserver active-directory password-change
  • vserver active-directory password-reset
  • vserver active-directory show
  • vserver audit create
  • vserver audit delete
  • vserver audit disable
  • vserver audit enable
  • vserver audit modify
  • vserver audit prepare-to-downgrade
  • vserver audit rotate-log
  • vserver audit show
  • vserver check lif-multitenancy run
  • vserver check lif-multitenancy show-results
  • vserver check lif-multitenancy show
  • vserver cifs add-netbios-aliases
  • vserver cifs check
  • vserver cifs create
  • vserver cifs delete
  • vserver cifs modify
  • vserver cifs nbtstat
  • vserver cifs prepare-to-downgrade
  • vserver cifs remove-netbios-aliases
  • vserver cifs repair-modify
  • vserver cifs show
  • vserver cifs start
  • vserver cifs stop
  • vserver cifs branchcache create
  • vserver cifs branchcache delete
  • vserver cifs branchcache hash-create
  • vserver cifs branchcache hash-flush
  • vserver cifs branchcache modify
  • vserver cifs branchcache show
  • vserver cifs cache name-to-sid delete-all
  • vserver cifs cache name-to-sid delete
  • vserver cifs cache name-to-sid show
  • vserver cifs cache settings modify
  • vserver cifs cache settings show
  • vserver cifs cache sid-to-name delete-all
  • vserver cifs cache sid-to-name delete
  • vserver cifs cache sid-to-name show
  • vserver cifs character-mapping create
  • vserver cifs character-mapping delete
  • vserver cifs character-mapping modify
  • vserver cifs character-mapping show
  • vserver cifs connection show
  • vserver cifs domain discovered-servers reset-servers
  • vserver cifs domain discovered-servers show
  • vserver cifs domain discovered-servers discovery-mode modify
  • vserver cifs domain discovered-servers discovery-mode show
  • vserver cifs domain name-mapping-search add
  • vserver cifs domain name-mapping-search modify
  • vserver cifs domain name-mapping-search remove
  • vserver cifs domain name-mapping-search show
  • vserver cifs domain password change
  • vserver cifs domain password reset
  • vserver cifs domain password schedule modify
  • vserver cifs domain password schedule show
  • vserver cifs domain preferred-dc add
  • vserver cifs domain preferred-dc check
  • vserver cifs domain preferred-dc remove
  • vserver cifs domain preferred-dc show
  • vserver cifs domain trusts rediscover
  • vserver cifs domain trusts show
  • vserver cifs group-policy modify
  • vserver cifs group-policy show-applied
  • vserver cifs group-policy show-defined
  • vserver cifs group-policy show
  • vserver cifs group-policy update
  • vserver cifs group-policy central-access-policy show-applied
  • vserver cifs group-policy central-access-policy show-defined
  • vserver cifs group-policy central-access-rule show-applied
  • vserver cifs group-policy central-access-rule show-defined
  • vserver cifs group-policy restricted-group show-applied
  • vserver cifs group-policy restricted-group show-defined
  • vserver cifs home-directory modify
  • vserver cifs home-directory show-user
  • vserver cifs home-directory show
  • vserver cifs home-directory search-path add
  • vserver cifs home-directory search-path remove
  • vserver cifs home-directory search-path reorder
  • vserver cifs home-directory search-path show
  • vserver cifs options modify
  • vserver cifs options show
  • vserver cifs security modify
  • vserver cifs security show
  • vserver cifs session close
  • vserver cifs session show
  • vserver cifs session file close
  • vserver cifs session file show
  • vserver cifs share create
  • vserver cifs share delete
  • vserver cifs share modify
  • vserver cifs share show
  • vserver cifs share access-control create
  • vserver cifs share access-control delete
  • vserver cifs share access-control modify
  • vserver cifs share access-control show
  • vserver cifs share properties add
  • vserver cifs share properties remove
  • vserver cifs share properties show
  • vserver cifs superuser create
  • vserver cifs superuser delete
  • vserver cifs superuser show
  • vserver cifs symlink create
  • vserver cifs symlink delete
  • vserver cifs symlink modify
  • vserver cifs symlink show
  • vserver cifs users-and-groups remove-stale-records
  • vserver cifs users-and-groups update-names
  • vserver cifs users-and-groups local-group add-members
  • vserver cifs users-and-groups local-group create
  • vserver cifs users-and-groups local-group delete
  • vserver cifs users-and-groups local-group modify
  • vserver cifs users-and-groups local-group remove-members
  • vserver cifs users-and-groups local-group rename
  • vserver cifs users-and-groups local-group show-members
  • vserver cifs users-and-groups local-group show
  • vserver cifs users-and-groups local-user create
  • vserver cifs users-and-groups local-user delete
  • vserver cifs users-and-groups local-user modify
  • vserver cifs users-and-groups local-user rename
  • vserver cifs users-and-groups local-user set-password
  • vserver cifs users-and-groups local-user show-membership
  • vserver cifs users-and-groups local-user show
  • vserver cifs users-and-groups privilege add-privilege
  • vserver cifs users-and-groups privilege remove-privilege
  • vserver cifs users-and-groups privilege reset-privilege
  • vserver cifs users-and-groups privilege show
  • vserver config-replication pause
  • vserver config-replication resume
  • vserver config-replication show
  • vserver export-policy check-access
  • vserver export-policy copy
  • vserver export-policy create
  • vserver export-policy delete
  • vserver export-policy rename
  • vserver export-policy show
  • vserver export-policy access-cache flush
  • vserver export-policy access-cache show-rules
  • vserver export-policy access-cache show
  • vserver export-policy access-cache config modify-all-vservers
  • vserver export-policy access-cache config modify
  • vserver export-policy access-cache config show-all-vservers
  • vserver export-policy access-cache config show
  • vserver export-policy cache flush
  • vserver export-policy config-checker show
  • vserver export-policy config-checker start
  • vserver export-policy config-checker stop
  • vserver export-policy config-checker rule delete
  • vserver export-policy config-checker rule show
  • vserver export-policy netgroup check-membership
  • vserver export-policy netgroup cache show
  • vserver export-policy netgroup queue show
  • vserver export-policy rule add-clientmatches
  • vserver export-policy rule create
  • vserver export-policy rule delete
  • vserver export-policy rule modify
  • vserver export-policy rule remove-clientmatches
  • vserver export-policy rule setindex
  • vserver export-policy rule show
  • vserver fcp create
  • vserver fcp delete
  • vserver fcp modify
  • vserver fcp show
  • vserver fcp start
  • vserver fcp stop
  • vserver fcp initiator show
  • vserver fcp interface show
  • vserver fcp nameserver show
  • vserver fcp ping-igroup show
  • vserver fcp ping-initiator show
  • vserver fcp portname set
  • vserver fcp portname show
  • vserver fcp topology show
  • vserver fcp wwn blacklist show
  • vserver fcp wwpn-alias remove
  • vserver fcp wwpn-alias set
  • vserver fcp wwpn-alias show
  • vserver fpolicy disable
  • vserver fpolicy enable
  • vserver fpolicy engine-connect
  • vserver fpolicy engine-disconnect
  • vserver fpolicy show-enabled
  • vserver fpolicy show-engine
  • vserver fpolicy show-passthrough-read-connection
  • vserver fpolicy show
  • vserver fpolicy policy create
  • vserver fpolicy policy delete
  • vserver fpolicy policy modify
  • vserver fpolicy policy show
  • vserver fpolicy policy event create
  • vserver fpolicy policy event delete
  • vserver fpolicy policy event modify
  • vserver fpolicy policy event show
  • vserver fpolicy policy external-engine create
  • vserver fpolicy policy external-engine delete
  • vserver fpolicy policy external-engine modify
  • vserver fpolicy policy external-engine show
  • vserver fpolicy policy scope create
  • vserver fpolicy policy scope delete
  • vserver fpolicy policy scope modify
  • vserver fpolicy policy scope show
  • vserver iscsi create
  • vserver iscsi delete
  • vserver iscsi modify
  • vserver iscsi show
  • vserver iscsi start
  • vserver iscsi stop
  • vserver iscsi command show
  • vserver iscsi connection show
  • vserver iscsi connection shutdown
  • vserver iscsi initiator show
  • vserver iscsi interface disable
  • vserver iscsi interface enable
  • vserver iscsi interface modify
  • vserver iscsi interface show
  • vserver iscsi interface accesslist add
  • vserver iscsi interface accesslist remove
  • vserver iscsi interface accesslist show
  • vserver iscsi isns create
  • vserver iscsi isns delete
  • vserver iscsi isns modify
  • vserver iscsi isns show
  • vserver iscsi isns start
  • vserver iscsi isns stop
  • vserver iscsi isns update
  • vserver iscsi security add-initator-address-ranges
  • vserver iscsi security create
  • vserver iscsi security default
  • vserver iscsi security delete
  • vserver iscsi security modify
  • vserver iscsi security prepare-to-downgrade
  • vserver iscsi security remove-initator-address-ranges
  • vserver iscsi security show
  • vserver iscsi session show
  • vserver iscsi session shutdown
  • vserver iscsi session parameter show
  • vserver locks break
  • vserver locks show
  • vserver name-mapping create
  • vserver name-mapping delete
  • vserver name-mapping insert
  • vserver name-mapping modify
  • vserver name-mapping refresh-hostname-ip
  • vserver name-mapping show
  • vserver name-mapping swap
  • vserver nfs create
  • vserver nfs delete
  • vserver nfs modify
  • vserver nfs off
  • vserver nfs on
  • vserver nfs prepare-for-v3-ms-dos-client-downgrade
  • vserver nfs prepare-to-downgrade
  • vserver nfs show
  • vserver nfs start
  • vserver nfs status
  • vserver nfs stop
  • vserver nfs credentials count
  • vserver nfs credentials flush
  • vserver nfs credentials show
  • vserver nfs kerberos interface disable
  • vserver nfs kerberos interface enable
  • vserver nfs kerberos interface modify
  • vserver nfs kerberos interface show
  • vserver nfs kerberos realm create
  • vserver nfs kerberos realm delete
  • vserver nfs kerberos realm modify
  • vserver nfs kerberos realm show
  • vserver nfs pnfs devices create
  • vserver nfs pnfs devices delete
  • vserver nfs pnfs devices show
  • vserver nfs pnfs devices cache show
  • vserver nfs pnfs devices mappings show
  • vserver nvme create
  • vserver nvme delete
  • vserver nvme modify
  • vserver nvme show-interface
  • vserver nvme show
  • vserver nvme feature show
  • vserver nvme namespace create
  • vserver nvme namespace delete
  • vserver nvme namespace modify
  • vserver nvme namespace show
  • vserver nvme subsystem create
  • vserver nvme subsystem delete
  • vserver nvme subsystem modify
  • vserver nvme subsystem show
  • vserver nvme subsystem controller show
  • vserver nvme subsystem host add
  • vserver nvme subsystem host remove
  • vserver nvme subsystem host show
  • vserver nvme subsystem map add
  • vserver nvme subsystem map remove
  • vserver nvme subsystem map show
  • vserver peer accept
  • vserver peer create
  • vserver peer delete
  • vserver peer modify-local-name
  • vserver peer modify
  • vserver peer reject
  • vserver peer repair-peer-name
  • vserver peer resume
  • vserver peer show-all
  • vserver peer show
  • vserver peer suspend
  • vserver peer permission create
  • vserver peer permission delete
  • vserver peer permission modify
  • vserver peer permission show
  • vserver peer transition create
  • vserver peer transition delete
  • vserver peer transition modify
  • vserver peer transition show
  • vserver san prepare-to-downgrade
  • vserver security file-directory apply
  • vserver security file-directory remove-slag
  • vserver security file-directory show-effective-permissions
  • vserver security file-directory show
  • vserver security file-directory job show
  • vserver security file-directory ntfs create
  • vserver security file-directory ntfs delete
  • vserver security file-directory ntfs modify
  • vserver security file-directory ntfs show
  • vserver security file-directory ntfs dacl add
  • vserver security file-directory ntfs dacl modify
  • vserver security file-directory ntfs dacl remove
  • vserver security file-directory ntfs dacl show
  • vserver security file-directory ntfs sacl add
  • vserver security file-directory ntfs sacl modify
  • vserver security file-directory ntfs sacl remove
  • vserver security file-directory ntfs sacl show
  • vserver security file-directory policy create
  • vserver security file-directory policy delete
  • vserver security file-directory policy show
  • vserver security file-directory policy task add
  • vserver security file-directory policy task modify
  • vserver security file-directory policy task remove
  • vserver security file-directory policy task show
  • vserver security trace filter create
  • vserver security trace filter delete
  • vserver security trace filter modify
  • vserver security trace filter show
  • vserver security trace trace-result delete
  • vserver security trace trace-result show
  • vserver services access-check authentication get-claim-name
  • vserver services access-check authentication get-dc-info
  • vserver services access-check authentication login-cifs
  • vserver services access-check authentication ontap-admin-login-cifs
  • vserver services access-check authentication show-creds
  • vserver services access-check authentication show-ontap-admin-unix-creds
  • vserver services access-check authentication sid-to-uid
  • vserver services access-check authentication sid-to-unix-name
  • vserver services access-check authentication translate
  • vserver services access-check authentication uid-to-sid
  • vserver services access-check dns forward-lookup
  • vserver services access-check dns srv-lookup
  • vserver services access-check name-mapping show
  • vserver services access-check server-discovery reset
  • vserver services access-check server-discovery show-host
  • vserver services access-check server-discovery show-site
  • vserver services access-check server-discovery test
  • vserver services name-service cache group-membership delete-all
  • vserver services name-service cache group-membership delete
  • vserver services name-service cache group-membership show
  • vserver services name-service cache group-membership settings modify
  • vserver services name-service cache group-membership settings show
  • vserver services name-service cache hosts forward-lookup delete-all
  • vserver services name-service cache hosts forward-lookup delete
  • vserver services name-service cache hosts forward-lookup show
  • vserver services name-service cache hosts reverse-lookup delete-all
  • vserver services name-service cache hosts reverse-lookup delete
  • vserver services name-service cache hosts reverse-lookup show
  • vserver services name-service cache hosts settings modify
  • vserver services name-service cache hosts settings show
  • vserver services name-service cache netgroups ip-to-netgroup delete-all
  • vserver services name-service cache netgroups ip-to-netgroup delete
  • vserver services name-service cache netgroups ip-to-netgroup show
  • vserver services name-service cache netgroups members delete-all
  • vserver services name-service cache netgroups members delete
  • vserver services name-service cache netgroups members show
  • vserver services name-service cache netgroups settings modify
  • vserver services name-service cache netgroups settings show
  • vserver services name-service cache settings modify
  • vserver services name-service cache settings show
  • vserver services name-service cache unix-group group-by-gid delete-all
  • vserver services name-service cache unix-group group-by-gid delete
  • vserver services name-service cache unix-group group-by-gid show
  • vserver services name-service cache unix-group group-by-name delete-all
  • vserver services name-service cache unix-group group-by-name delete
  • vserver services name-service cache unix-group group-by-name show
  • vserver services name-service cache unix-group settings modify
  • vserver services name-service cache unix-group settings show
  • vserver services name-service cache unix-user settings modify
  • vserver services name-service cache unix-user settings show
  • vserver services name-service cache unix-user user-by-id delete-all
  • vserver services name-service cache unix-user user-by-id delete
  • vserver services name-service cache unix-user user-by-id show
  • vserver services name-service cache unix-user user-by-name delete-all
  • vserver services name-service cache unix-user user-by-name delete
  • vserver services name-service cache unix-user user-by-name show
  • vserver services name-service dns check
  • vserver services name-service dns create
  • vserver services name-service dns delete
  • vserver services name-service dns modify
  • vserver services name-service dns show
  • vserver services name-service dns dynamic-update modify
  • vserver services name-service dns dynamic-update prepare-to-downgrade
  • vserver services name-service dns dynamic-update show
  • vserver services name-service dns dynamic-update record add
  • vserver services name-service dns dynamic-update record delete
  • vserver services name-service dns hosts create
  • vserver services name-service dns hosts delete
  • vserver services name-service dns hosts modify
  • vserver services name-service dns hosts show
  • vserver services name-service getxxbyyy getaddrinfo
  • vserver services name-service getxxbyyy getgrbygid
  • vserver services name-service getxxbyyy getgrbyname
  • vserver services name-service getxxbyyy getgrlist
  • vserver services name-service getxxbyyy gethostbyaddr
  • vserver services name-service getxxbyyy gethostbyname
  • vserver services name-service getxxbyyy getnameinfo
  • vserver services name-service getxxbyyy getpwbyname
  • vserver services name-service getxxbyyy getpwbyuid
  • vserver services name-service getxxbyyy netgrpcheck
  • vserver services name-service ldap check
  • vserver services name-service ldap create
  • vserver services name-service ldap delete
  • vserver services name-service ldap modify
  • vserver services name-service ldap show
  • vserver services name-service ldap client create
  • vserver services name-service ldap client delete
  • vserver services name-service ldap client modify-bind-password
  • vserver services name-service ldap client modify
  • vserver services name-service ldap client show
  • vserver services name-service ldap client schema copy
  • vserver services name-service ldap client schema delete
  • vserver services name-service ldap client schema modify
  • vserver services name-service ldap client schema show
  • vserver services name-service netgroup load
  • vserver services name-service netgroup status
  • vserver services name-service netgroup file delete
  • vserver services name-service netgroup file show
  • vserver services name-service nis-domain create
  • vserver services name-service nis-domain delete
  • vserver services name-service nis-domain modify
  • vserver services name-service nis-domain show-bound
  • vserver services name-service nis-domain show
  • vserver services name-service nis-domain group-database build
  • vserver services name-service nis-domain group-database status
  • vserver services name-service ns-switch create
  • vserver services name-service ns-switch delete
  • vserver services name-service ns-switch modify
  • vserver services name-service ns-switch show
  • vserver services name-service unix-group adduser
  • vserver services name-service unix-group create
  • vserver services name-service unix-group delete
  • vserver services name-service unix-group deluser
  • vserver services name-service unix-group load-from-uri
  • vserver services name-service unix-group modify
  • vserver services name-service unix-group show
  • vserver services name-service unix-group file show
  • vserver services name-service unix-group file status
  • vserver services name-service unix-group max-limit modify
  • vserver services name-service unix-group max-limit show
  • vserver services name-service unix-user create
  • vserver services name-service unix-user delete
  • vserver services name-service unix-user load-from-uri
  • vserver services name-service unix-user modify
  • vserver services name-service unix-user show
  • vserver services name-service unix-user file show
  • vserver services name-service unix-user file status
  • vserver services name-service unix-user max-limit modify
  • vserver services name-service unix-user max-limit show
  • vserver services name-service ypbind start
  • vserver services name-service ypbind status
  • vserver services name-service ypbind stop
  • vserver services ndmp generate-password
  • vserver services ndmp kill-all
  • vserver services ndmp kill
  • vserver services ndmp modify
  • vserver services ndmp off
  • vserver services ndmp on
  • vserver services ndmp probe
  • vserver services ndmp show
  • vserver services ndmp status
  • vserver services ndmp extensions modify
  • vserver services ndmp extensions show
  • vserver services ndmp log start
  • vserver services ndmp log stop
  • vserver services ndmp restartable-backup delete
  • vserver services ndmp restartable-backup show
  • vserver services web modify
  • vserver services web show
  • vserver services web access create
  • vserver services web access delete
  • vserver services web access show
  • vserver smtape break
  • vserver snapdiff-rpc-server off
  • vserver snapdiff-rpc-server on
  • vserver snapdiff-rpc-server show
  • vserver vscan disable
  • vserver vscan enable
  • vserver vscan reset
  • vserver vscan show-events
  • vserver vscan show
  • vserver vscan connection-status show-all
  • vserver vscan connection-status show-connected
  • vserver vscan connection-status show-not-connected
  • vserver vscan connection-status show
  • vserver vscan on-access-policy create
  • vserver vscan on-access-policy delete
  • vserver vscan on-access-policy disable
  • vserver vscan on-access-policy enable
  • vserver vscan on-access-policy modify
  • vserver vscan on-access-policy show
  • vserver vscan on-access-policy file-ext-to-exclude add
  • vserver vscan on-access-policy file-ext-to-exclude remove
  • vserver vscan on-access-policy file-ext-to-exclude show
  • vserver vscan on-access-policy file-ext-to-include add
  • vserver vscan on-access-policy file-ext-to-include remove
  • vserver vscan on-access-policy file-ext-to-include show
  • vserver vscan on-access-policy paths-to-exclude add
  • vserver vscan on-access-policy paths-to-exclude remove
  • vserver vscan on-access-policy paths-to-exclude show
  • vserver vscan on-demand-task create
  • vserver vscan on-demand-task delete
  • vserver vscan on-demand-task modify
  • vserver vscan on-demand-task run
  • vserver vscan on-demand-task schedule
  • vserver vscan on-demand-task show
  • vserver vscan on-demand-task unschedule
  • vserver vscan on-demand-task report delete
  • vserver vscan on-demand-task report show
  • vserver vscan scanner-pool apply-policy
  • vserver vscan scanner-pool create
  • vserver vscan scanner-pool delete
  • vserver vscan scanner-pool modify
  • vserver vscan scanner-pool resolve-hostnames
  • vserver vscan scanner-pool show-active
  • vserver vscan scanner-pool show
  • vserver vscan scanner-pool privileged-users add
  • vserver vscan scanner-pool privileged-users remove
  • vserver vscan scanner-pool privileged-users show
  • vserver vscan scanner-pool servers add
  • vserver vscan scanner-pool servers remove
  • vserver vscan scanner-pool servers show

storage disk assign


Assign ownership of a disk to a system

Availability: This command is available to cluster administrators at the admin privilege level.

Description

The storage disk assign command assigns ownership of an unowned disk or array LUN to a specific node, or changes the ownership of a disk or an array LUN from one node to another. You can designate disk ownership by specifying disk names, array LUN names, wildcards, or all (for all disks or array LUNs visible to the node). For disks, you can also set up disk ownership autoassignment, assign disks to a particular pool, or assign disks by copying ownership from another disk.

Parameters

-disk - This specifies the disk or array LUN that is to be assigned. Disk names take one of the following forms:

Disks are named in the form <stack-id>.<shelf>.<bay>

Disks on multi-disk carriers are named in the form <stack-id>.<shelf>.<bay>.<lun>

Virtual disks are named in the form <prefix>.<number>, where prefix is the storage array's prefix and number is a unique ascending number.
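For example, under this scheme the disk in bay 16 of shelf 1 on stack 1 is named 1.1.16, which is the form used in the examples later in this section.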

Disk names take one of the following forms on clusters that are not yet fully upgraded to Data ONTAP 8.3:

Disks that are not attached to a switch are named in the form <node>:<host_adapter>.<loop_ID>. For disks with a LUN, the form is <node>:<host_adapter>.<loop_ID>L<LUN>. For instance, disk number 16 on host adapter 1a on a node named node0a is named node0a:1a.16. The same disk on LUN lun0 is named node0a:1a.16Llun0.

Disks that are attached to a switch are named in the form <node>:<switch_name>:<switch_port>.<loop_ID>. For disks with a LUN, the form is <node>:<switch_name>:<switch_port>.<loop_ID>L<LUN>. For instance, disk number 08 on port 11 of switch fc1 on a node named node0a is named node0a:fc1:11.08. The same disk on LUN lun1 is named node0a:fc1:11.08Llun1.

Before the cluster is upgraded to Data ONTAP 8.3, the same disk can have multiple disk names, depending on how the disk is connected. For example, a disk known to a node named alpha as alpha:1a.19 can be known to a node named beta as beta:0b.37. All names are listed in the output of queries and are equally valid. To determine a disk's unique identity, run a detailed query and look for the disk's universal unique identifier (UUID) or serial number.

A subset of disks or array LUNs can be assigned using the wildcard character (*) in the -disk parameter. Either the -owner, the -sysid, or the -copy-ownership-from parameter must be specified with the -disk parameter. Do not use the -node parameter with the -disk parameter.

This specifies the list of disks to be assigned.

-all - This optional parameter causes assignment of all visible unowned disks or array LUNs to the node specified in the -node parameter. The -node parameter must be specified with the -all parameter. When the -copy-ownership-from parameter is specified with the -node parameter, it assigns disk ownership based on the -copy-ownership-from parameter; otherwise it assigns ownership of the disks based on the -node parameter. Do not use the -owner or the -sysid parameter with the -all parameter.

-type - This optional parameter assigns ownership of a specific type of disk or array LUN (or a set of disks/array LUNs) to a node. The -count parameter must be specified with the -type parameter.

-count - This optional parameter assigns ownership of the number of disks or array LUNs specified in the -count parameter to a node.

-auto - This optional parameter causes all visible disks eligible for autoassignment to be immediately assigned to the node specified in the -node parameter, regardless of the setting of the disk.auto_assign option. Only unowned disks on loops or stacks owned wholly by that system and which have the same pool information will be assigned. The -node parameter must be specified with the -auto parameter. Do not use the -owner, the -sysid, or the -copy-ownership-from parameter with the -auto parameter. When possible, use the -auto parameter rather than the -all parameter to conform to disk ownership best practices. The -auto parameter is ignored for array LUNs.

-pool - This optional parameter specifies the pool to which a disk must be assigned. It can take values of Pool0 or Pool1.

-owner - This optional parameter specifies the node to which the disk or array LUN has to be assigned.

-sysid - This optional parameter specifies the serial number (NVRAM ID) of the node to which the disk or array LUN has to be assigned.

-copy-ownership-from - This optional parameter specifies the disk name from which the node copies disk ownership information. Use this parameter to give disks the same ownership as the provided input disk.

This optional parameter is used to set the checksum type for a disk or an array LUN. The possible values are block, zoned, and advanced_zoned. This operation will fail if the specified disk is incompatible with the specified checksum type. A newly created aggregate with zoned checksum array LUNs is assigned the advanced zoned checksum (AZCS) checksum type. The AZCS checksum type provides more functionality than the "version 1" zoned checksum type which has been supported in previous Data ONTAP releases. Zoned checksum spare array LUNs added to an existing zoned checksum aggregate continue to be zoned checksum. Zoned checksum spare array LUNs added to an AZCS checksum type aggregate use the AZCS checksum scheme for managing checksums. For some disks (e.g. FCAL, SSD, SAS disks), the checksum type cannot be modified. For more information on modifying the checksum type, refer to the "Physical Storage Management Guide".

-force - This optional parameter forces the assignment of ownership of an already owned disk to a node. This parameter can also be used to assign an array LUN with a redundancy error, for example, if the array LUN is available on only one path. For a disk that is part of a live aggregate, even specifying the -force parameter will not force the assignment, since doing so would be catastrophic.

-node - This optional parameter is used with either the -auto or the -all parameter. If used with the -auto parameter, all disks that are visible to the node specified in the -node parameter and that are eligible for autoassignment are assigned to it. If used with the -all parameter, all unowned disks or array LUNs visible to the node are assigned to it.

-root - This optional parameter assigns the root partition of a root-data or root-data1-data2 partitioned disk. You cannot use this parameter with disks that are part of a storage pool. The default value is false.

-data - This optional parameter assigns the data partition of a root-data partitioned disk. You cannot use this parameter with disks that are part of a storage pool. The default value is false.

-data1 - This optional parameter assigns the data1 partition of a root-data1-data2 partitioned disk. You cannot use this parameter with disks that are part of a storage pool. The default value is false.

-data2 - This optional parameter assigns the data2 partition of a root-data1-data2 partitioned disk. You cannot use this parameter with disks that are part of a storage pool. The default value is false.
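Taken together, the parameters above suggest the overall shape of the command. The following is a minimal syntax sketch assembled from those descriptions; the flag names for the list-of-disks and checksum parameters (-disklist and -checksum) are assumed from the standard ONTAP command reference rather than stated in the text above, and angle brackets mark values you supply:

    storage disk assign
        {-disk <disk path name> | -disklist <disk>,... | -all [true] | -auto [true] |
         -type <disk type> -count <number of disks>}
        [-owner <nodename> | -sysid <nvram id> | -copy-ownership-from <disk path name>]
        [-node <nodename>] [-pool Pool0|Pool1] [-checksum {block|zoned|advanced_zoned}]
        [-force [true]] [-root true | -data true | -data1 true | -data2 true]

Examples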

The following example assigns all unowned disks or array LUNs visible to a node named node1 to itself:
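A sketch of the corresponding command, using the flags described above (the node name node1 comes from the example):

    storage disk assign -all true -node node1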

The following example autoassigns all unowned disks (eligible for autoassignment) visible to a node named node1 to itself:
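A corresponding sketch using the -auto parameter:

    storage disk assign -auto true -node node1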

The following two examples show the working of the -force parameter with a spare disk that is already owned by another system:
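A sketch of the two attempts; the disk name 1.1.27 is a placeholder. Without -force, assigning an already-owned spare is refused; with -force true it succeeds:

    storage disk assign -disk 1.1.27 -owner node1
    storage disk assign -disk 1.1.27 -owner node1 -force true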

The following example assigns ownership of the set of unowned disks on stack 1 to a node named node1:
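One plausible form uses a wildcard that matches the disks on stack 1 (the 1.* pattern is an assumption based on the <stack-id>.<shelf>.<bay> naming form above):

    storage disk assign -disk 1.* -owner node1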

The following example assigns ownership of unowned disk 1.1.16 by copying ownership from disk 1.1.18:
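Sketch, using the -copy-ownership-from parameter described above:

    storage disk assign -disk 1.1.16 -copy-ownership-from 1.1.18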

The following example assigns the root partition of disk 1.1.16 to node1:
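Sketch, selecting the root partition with -root (the -owner usage is assumed):

    storage disk assign -disk 1.1.16 -owner node1 -root true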

The following example assigns the data partition of root-data partitioned disk 1.1.16 to node1:
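Similarly, for the data partition:

    storage disk assign -disk 1.1.16 -owner node1 -data true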

The following example assigns the data1 partition of root-data1-data2 partitioned disk 1.1.24 to node1:
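For the data1 partition:

    storage disk assign -disk 1.1.24 -owner node1 -data1 true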

The following example assigns the data2 partition of root-data1-data2 partitioned disk 1.1.24 to node1:
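And for the data2 partition:

    storage disk assign -disk 1.1.24 -owner node1 -data2 true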



COMMENTS

  1. Solved: How to change container ?

    Create an aggr with 11 drives + 1 spare, and then whatever volumes you want on that. You can also move volumes between the two aggrs if you want. I'd name the volume something different than the aggr name, just for easier mgmt, but it's your system.

  2. ONTAP 9 assign a disk as a spare

    If it is a used disk that already has partitions, try to assign all partitions: storage disk assign -disk <disk id> -owner nodename -force; storage disk assign -disk <disk id> -owner nodename -root true -force; storage disk assign -disk <disk id> -owner nodename -data true -force. Remove the foreign aggregate if applicable.

  3. Reassign disks to nodes with System Manager

    Steps. Click Storage > Aggregates & Disks > Disks. In the Disks window, select the Inventory tab. Select the disks that you want to reassign, and then click Assign. In the Warning dialog box, click Continue. In the Assign Disks dialog box, select the node to which you want to reassign the disks. Click Assign.

  4. Solved: NetApp FAS2552 Aggregate and Disk

    1. Shared means a partitioned disk. Look in the documentation for ADP (Advanced Disk Partitioning). In summary, the disk is divided into two partitions and each partition can be used independently as part of an aggregate (or pool, but that's another story). 2. Yes, each partition can be part of a separate aggregate. 3.

  5. Update disk ownership, change authentication keys, or sanitize ...

    sanitize_disk cryptographically erases all user data from a spare or broken drive by altering the data encryption key. Resets the data AK to the drive-unique MSID value and disables data-at-rest protection. Used when a drive is being repurposed or returned. enum: ["rekey_data_default", "rekey_data_auto_id", "sanitize_disk"]

  6. Data ONTAP 8: How does disk ownership of NetApp ...

    To assign disk ownership of a NetApp shared storage system (Data ONTAP 8 7-Mode): disk assign <diskname> [-f] [-p <pool>] [-o <ownername>] [-s <sysid>]. Normal usage is to assign ownership of unowned disks to the local storage system: disk assign 0b.16; disk assign 0b.17 -p 1. To assign a disk owned by the local storage system back to unowned: disk assign 0b.16 -s ...

  7. How to remove ownership on foreign disks in ...

    Any attempt to remove foreign ownership fails with the following error: Error: command failed: Failed to change the owner of disk "r2cdot1-01:0b.10.0". Reason: Unknown. r2cdot1-01*> disk assign 0b.10. -s unowned (Disk assign request failed). r2cdot1-01*> disk assign 0b.10. -s unowned -f (Disk assign request failed).

  8. CLI

    lem-netapp-c190::*> storage disk show
                    Usable           Disk    Container   Container
    Disk              Size Shelf Bay Type    Type        Name                       Owner
    ------- ------------- ----- --- ------- ----------- -------------------------- ------------------
    1.0.0         894.0GB     0   0 SSD     shared      aggr0_lem_netapp_c190_02,  lem-netapp-c190-02
                                                        lem_netapp_c190_01_SSD_1,
                                                        lem_netapp_c190_02_SSD_1
    1.0.1         894.0GB     0   1 SSD     shared      aggr0_lem_netapp_c190_02,  lem ...
                                                        lem_netapp_c190_01_SSD_1,
                                                        lem_netapp_c190_02_SSD_1

  9. Hot-add a drive

    Hot-add a drive - NS224 shelves (05/23/2024). You can add new drives to a powered-on shelf non-disruptively, even during I/O operations. Use the NetApp Knowledge Base article Best practices for adding disks to an existing shelf or cluster.

  10. Setup DR Test SVM (testing-nasprd) :: IT Documentation

    sux-netapp-c190::> cifs create -cifs-server NASPRD -domain BLUEBUNNY.COM -ou "OU=Linux SMB Shares,OU=Computer Accounts,DC=bluebunny,DC=com" -status-admin up -vserver testing-nasprd In order to create an Active Directory machine account for the CIFS server, you must supply the name and password of a Windows account with sufficient privileges to add computers to the "OU=Linux SMB Shares,OU ...

  11. Solved: Re: container-type change

    Converting to and from ADP can only be done on (re)initialization of the system. You would need to remove all data from the system, and reboot it and run option 9 then 9a/9b. Can you post the output of the following just so we can verify: storage disk show; storage disk show -partition-owners...

  12. General Availability: Azure NetApp Files backup

    Azure NetApp Files backup expands the data protection capabilities of Azure NetApp Files by providing fully managed backup solution for long-term snapshot archive, recovery, and compliance. ... Deploy and scale containers on managed Kubernetes. ... Azure Disk Storage High-performance, highly durable block storage ...

  13. NetApp AFF A1K, AFF A90, and AFF A70 Unified Storage Systems Built for

    These systems are AFF A1K, AFF A90, and AFF A70, which can turbo-charge enterprise workloads by delivering: Up to 2x better performance with unmatched 40 million IO/s, 1TB/s throughput. Proven 99.9999% data availability. Leading raw-to-effective capacity, including always-on data reduction and 4:1 Storage Efficiency Guarantee.

  14. How to Change Docker's Default Data Directory

    Step 2: Move the Docker's Data Directory. Docker stores all its data in a default directory, including images, containers, volumes, and networks. On most Linux systems, this is typically " /var/lib/docker .". While this works well for initial setups, as the number of projects grows, so does the data. Because of this, the next step is to ...

  15. Get-Volume does not work in container

    Instead, that volume is created inside the Docker VM. If you do docker volume inspect <vol-name>, it'll probably show that the mountpoint is at C:\ProgramData\Docker\volumes. However, if you open up File Explorer and try to navigate to that path, it won't let you.

  16. Monitoring Available Disk Space

    To create the monitor for available disk space: In the navigation menu, click Monitors. Click New Monitor. Select Metric as the monitor type. In the Define the metric section, input system.disk.free and avg by host (Query a). Click Add Query and input system.disk.total and avg by host (Query b). In the formula that appears, replace the default ...

  17. Solved: Re: How to change container ?

    not the SATA01 aggr.... But the new drives, change their ownership to node 1 and create the aggr there. It'll help balance the workload a bit better. but just a suggestion, you can create the new aggr on node 02. again, you cannot use those SAS drives to grow the "SATA01" aggr.

  18. Storage technology explained: File, block and object storage

    Object storage is the new kid on the block, relatively speaking. Unlike file and block storage, it lacks a file system and is based on a "flat" structure with access to objects via their ...

  19. Award Winners and Runners-up from 2024

    There were 2 NetApp units involved in the data migration: one in Bangalore and another in Denver. The transfer was performed using the inbuilt NetApp Snap Mirror feature. It involved migrating data in a follow the sun requirement, and critical data and security was paramount. The allocated WAN bandwidth for transfer was a 1Gb/s controlled by a ...

  20. Persistent storage using LVM Storage

    Specify the channel from which you want to retrieve the OpenShift Container Platform images. 5: Set this field to true to generate the OpenShift Update Service (OSUS) graph image. For more information, see About the OpenShift Update Service. 6: Specify the Operator catalog from which you want to retrieve the OpenShift Container Platform images. 7

  21. Disks window in System Manager

    Command buttons. Assigns or reassigns the ownership of the disks to a node. This button is enabled only if the container type of the selected disks is unassigned, spare, or shared. Erases all the data, and formats the spare disks and array LUNs. Updates the information in the window.

  22. How to unpartition a spare drive in ONTAP 9.6 and later

    NetApp provides no representations or warranties regarding the accuracy or reliability or serviceability of any information or recommendations provided in this publication or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein.

  23. Top 33 NetApp Interview Questions and Answers 2024

    NetApp specializes in cloud data services, data storage solutions, and hybrid cloud infrastructures. Research the latest NetApp products like AFF, SolidFire, ONTAP software, and cloud services. Understand how they work and their benefits. Data Management Principles: NetApp solutions are built around efficient data management and storage.

  24. storage disk assign

    storage disk assign (05/18/2024). Assign ownership of a disk to a system.