
add disk spare to aggregate


norman_vicente


I have 3 aggregates with 3 spares.

I'm wondering if I can take a spare disk and assign it to one of the aggregates for additional capacity. Is this possible? What are the risks?

Please see attached.


scottgelb

It is possible, but it could cause a slowdown in performance depending on how many disks are already in the aggregate. You need at least 1 spare (2 is better), so adding 1 or 2 drives isn't typical best practice; we prefer to add a complete raid group at a time when growing an aggregate. Adding fewer drives can cause those new drives to run hot (higher utilization, since they are written to first). There is a "reallocate" command that can re-lay out volumes, but it takes some time and has some limitations (if you run dedupe you need to be on 8.1 to run reallocate with dedupe, and there is no reallocate if you run compression). It really depends on how many drives you have in the aggregate now, the layout, and how many you plan to add. Also, it looks like you have 500GB drives in the aggregate now and are going to add 1TB spares. ONTAP supports mixed sizes in an aggregate or raid group, but it isn't something I like to do: the bigger drive will swap with one of the smaller parity drives, so you gain nothing from the first larger drive added; additional larger drives can then be added as data, so there are diminishing returns on usable capacity along with the performance hit you may take.
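As a sketch of what "adding a complete raid group at a time" looks like on a 7-mode system like this one (the disk names below are placeholders, and this filer only has three spares, so it assumes more matching disks are available):

    filer001> aggr status -s                                  list the available spares
    filer001> aggr status aggr1 -r                            review the current raid group layout and raidsize
    filer001> aggr add aggr1 -g new -d <disk1> <disk2> ...    add a whole new raid group of matching spares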

If you post the output of "sysconfig -r" and "sysconfig -V" (the second command can be derived from the first, but it is easier to see the layout of the raid groups this way), the community will give several opinions on the layout. Some may differ, but it is good to see the different opinions and best practices used by others.

Appreciated. Here is the config of the filer:

filer001> sysconfig -r
Aggregate aggr0 (online, raid_dp) (block checksums)
  Plex /aggr0/plex0 (online, normal, active)
    RAID group /aggr0/plex0/rg0 (normal)

      RAID Disk Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      --------- ------  ------------- ---- ---- ---- ----- --------------    --------------
      dparity   0b.16   0b    1   0   FC:B   -  ATA   7200 423111/866531584  423889/868126304
      parity    0a.32   0a    2   0   FC:A   -  ATA   7200 423111/866531584  423889/868126304
      data      0a.19   0a    1   3   FC:A   -  ATA   7200 423111/866531584  423889/868126304
      data      0a.33   0a    2   1   FC:A   -  ATA   7200 423111/866531584  423889/868126304
      data      0a.18   0a    1   2   FC:A   -  ATA   7200 423111/866531584  423889/868126304
      data      0b.34   0b    2   2   FC:B   -  ATA   7200 423111/866531584  423889/868126304
      data      0a.42   0a    2   10  FC:A   -  ATA   7200 423111/866531584  423889/868126304
      data      0b.27   0b    1   11  FC:B   -  ATA   7200 423111/866531584  423889/868126304
      data      0a.43   0a    2   11  FC:A   -  ATA   7200 423111/866531584  423889/868126304
      data      0b.28   0b    1   12  FC:B   -  ATA   7200 423111/866531584  423889/868126304
      data      0a.44   0a    2   12  FC:A   -  ATA   7200 423111/866531584  423889/868126304

Aggregate aggr1 (online, raid_dp) (block checksums)
  Plex /aggr1/plex0 (online, normal, active)
    RAID group /aggr1/plex0/rg0 (normal)

      RAID Disk Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      --------- ------  ------------- ---- ---- ---- ----- --------------    --------------
      dparity   0a.35   0a    2   3   FC:A   -  ATA   7200 423111/866531584  423889/868126304
      parity    0a.17   0a    1   1   FC:A   -  ATA   7200 423111/866531584  423889/868126304
      data      0b.36   0b    2   4   FC:B   -  ATA   7200 423111/866531584  423889/868126304
      data      0a.20   0a    1   4   FC:A   -  ATA   7200 423111/866531584  423889/868126304
      data      0b.37   0b    2   5   FC:B   -  ATA   7200 423111/866531584  423889/868126304
      data      0a.21   0a    1   5   FC:A   -  ATA   7200 423111/866531584  423889/868126304
      data      0a.22   0a    1   6   FC:A   -  ATA   7200 423111/866531584  423889/868126304
      data      0a.38   0a    2   6   FC:A   -  ATA   7200 423111/866531584  423889/868126304
      data      0a.23   0a    1   7   FC:A   -  ATA   7200 423111/866531584  423889/868126304
      data      0b.39   0b    2   7   FC:B   -  ATA   7200 423111/866531584  423889/868126304
      data      0b.24   0b    1   8   FC:B   -  ATA   7200 423111/866531584  423889/868126304
      data      0b.40   0b    2   8   FC:B   -  ATA   7200 423111/866531584  423889/868126304
      data      0b.25   0b    1   9   FC:B   -  ATA   7200 423111/866531584  423889/868126304
      data      0b.41   0b    2   9   FC:B   -  ATA   7200 423111/866531584  423889/868126304
      data      0b.26   0b    1   10  FC:B   -  ATA   7200 423111/866531584  423889/868126304

Aggregate aggr2 (online, raid_dp) (block checksums)
  Plex /aggr2/plex0 (online, normal, active)
    RAID group /aggr2/plex0/rg0 (normal)

      RAID Disk Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      --------- ------  ------------- ---- ---- ---- ----- --------------    --------------
      dparity   0c.50   0c    3   2   FC:A   -  ATA   7200 847555/1735794176 847827/1736350304
      parity    0c.58   0c    3   10  FC:A   -  ATA   7200 847555/1735794176 847827/1736350304
      data      0c.57   0c    3   9   FC:A   -  ATA   7200 847555/1735794176 847827/1736350304
      data      0c.56   0c    3   8   FC:A   -  ATA   7200 847555/1735794176 847827/1736350304
      data      0c.55   0c    3   7   FC:A   -  ATA   7200 847555/1735794176 847827/1736350304
      data      0c.53   0c    3   5   FC:A   -  ATA   7200 847555/1735794176 847827/1736350304
      data      0c.52   0c    3   4   FC:A   -  ATA   7200 847555/1735794176 847827/1736350304
      data      0c.51   0c    3   3   FC:A   -  ATA   7200 847555/1735794176 847827/1736350304
      data      0c.59   0c    3   11  FC:A   -  ATA   7200 847555/1735794176 847827/1736350304
      data      0c.49   0c    3   1   FC:A   -  ATA   7200 847555/1735794176 847827/1736350304
      data      0c.48   0c    3   0   FC:A   -  ATA   7200 847555/1735794176 847827/1736350304
      data      0c.54   0c    3   6   FC:A   -  ATA   7200 847555/1735794176 847827/1736350304
      data      0c.60   0c    3   12  FC:A   -  ATA   7200 847555/1735794176 847827/1736350304

Spare disks

RAID Disk       Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
---------       ------  ------------- ---- ---- ---- ----- --------------    --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare           0b.29   0b    1   13  FC:B   -  ATA   7200 423111/866531584  423889/868126304
spare           0b.45   0b    2   13  FC:B   -  ATA   7200 423111/866531584  423889/868126304
spare           0c.61   0c    3   13  FC:A   -  ATA   7200 847555/1735794176 847827/1736350304
filer001>

filer001> sysconfig -V

volume aggr0 (1 RAID group):

        group 0: 11 disks

volume aggr1 (1 RAID group):

        group 0: 15 disks

volume aggr2 (1 RAID group):

        group 0: 13 disks

filer001>
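(A rough back-of-the-envelope read of that layout, before the WAFL reserve and aggregate Snapshot reserve are taken out, using the right-sized capacities from the sysconfig -r output above:)

    aggr0: 11 disks - 2 parity = 9 data disks  x 423111 MB ≈ 3.6 TB
    aggr1: 15 disks - 2 parity = 13 data disks x 423111 MB ≈ 5.2 TB
    aggr2: 13 disks - 2 parity = 11 data disks x 847555 MB ≈ 8.9 TB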

Forgot to also ask what ONTAP version and controller model. Also, are these all 32-bit aggrs? I will go over the layout in a bit.

Sent from my iPhone 4S

These are 32-bit aggregates, on a FAS3020 running Data ONTAP 7.3.7.

With 32-bit aggregates this is a reasonable setup, although it might have made sense to combine aggr0 and aggr1, since they use the same 500GB drives and the combined aggregate would have more spindle I/O. Once created, you can't combine aggregates without destroying them, so that is likely not an option now. Keeping aggr2 separate with the 1TB drives makes sense, so the new aggregate has drives that are all the same size.

For spares, I prefer 2 of each drive type; that way Maintenance Center is used (where a failed drive is tested and put back in the spares pool if it passes diagnostics), but on a smaller system going with 1 does make sense. For 1TB you only have one spare, so you should not use that one. For 500GB you have 2 spares, and I would leave those alone too. You could use one of those drives for aggr0 or aggr1, but you would have a single-disk bottleneck once you add it, which may affect performance depending on current I/O. A perfstat or statit taken over time ("priv set advanced ; statit -b", then wait a while, then "priv set advanced ; statit -enr") will show current disk utilization, and from that you can interpret what may happen with a single-drive add. The old best practice was at least 3 drives at a time, and now we follow adding a full raid group at a time. If you need to grow an aggregate, it would be best to add a full raid group and not a single drive. I would keep the current layout as is, but it depends on whether you can get more disks and how desperate the situation is for space.
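As a concrete sketch of that measurement (run it during a representative busy period; the wait time here is only an example):

    filer001> priv set advanced
    filer001*> statit -b          begin collecting statistics
       ... wait 10-15 minutes of normal workload ...
    filer001*> statit -e          end collection and print the report (the output below was taken with "statit -enr")
    filer001*> priv set           return to the admin privilege level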

If I really need to add a spare disk, let's say a 500GB one, how much capacity will be added to the current aggr1? Sorry, I'm no NetApp expert.

By the way, here is the statit result:

filer001*> statit -enr

Hostname: filer001  ID: 0101202867  Memory: 2048 MB
NetApp Release 7.3.7: Thu May  3 03:56:11 PDT 2012
Start time: Fri Sep 14 06:09:53 PHT 2012

                       CPU Statistics
     369.083347 time (seconds)       100 %
     285.440750 system time           77 %
       8.451408 rupt time              2 %   (4835644 rupts x 2 usec/rupt)
     276.989342 non-rupt system time  75 %
     452.725942 idle time            123 %

     309.793971 time in CP            84 %   100 %
       6.925683 rupt time in CP                2 %   (3846202 rupts x 2 usec/rupt)

                       Multiprocessor Statistics                           cpu0       cpu1      total sk switches           16045076    4415501   20460577 hard switches          9774735    2591846   12366581 domain switches          30750      18492      49242 CP rupts               3533860     312342    3846202 nonCP rupts             929695      59747     989442 IPI rupts                 1642       2930       4572 grab kahuna                 14          7         21 grab w_xcleaner          58393      29891      88284

grab kahuna usec          2529       3110       5639 grab w_xcleaner usec  16247514   13569582   29817096 CP rupt usec           5930721     994962    6925683 nonCP rupt usec        1366298     159427    1525725 idle                 191120184  261605757  452725942 kahuna                67861337   43114777  110976115 storage               18308425    9829725   28138150 exempt                12950985   16071951   29022937 raid                  30240509   20510934   50751443 target                    5426       4894      10321 netcache                     0          0          0 netcache2                    0          0          0 cifs                     55722      51037     106760 wafl_exempt                  0          0          0 wafl_xcleaner                0          0          0 sm_exempt                12253      13624      25878 cluster                      0          0          0 protocol                     0          0          0 nwk_exclusive                0          0          0 nwk_exempt                   0          0          0 nwk_legacy            41231482   16726254   57957736 nwk_ctx1                     0          0          0 nwk_ctx2                     0          0          0 nwk_ctx3                     0          0          0 nwk_ctx4                     0          0          0

     204.958101 seconds with one or more CPUs active   ( 56%)

     129.425114 seconds with one CPU active            ( 35%)
      75.532987 seconds with both CPUs active          ( 20%)

                       Domain Utilization of Shared Domains          0 idle                                 0 kahuna          0 storage                              0 exempt          0 raid                                 0 target          0 netcache                             0 netcache2          0 cifs                                 0 wafl_exempt          0 wafl_xcleaner                        0 sm_exempt          0 cluster                              0 protocol          0 nwk_exclusive                        0 nwk_exempt          0 nwk_legacy                           0 nwk_ctx1          0 nwk_ctx2                             0 nwk_ctx3          0 nwk_ctx4

                       CSMP Domain Switches    From\To       idle     kahuna    storage     exempt       raid     target   netcache  netcache2       cifs wafl_exempt wafl_xcleaner  sm_exempt    cluster   protocol nwk_exclusive nwk_exempt nwk_legacy   nwk_ctx1   nwk_ctx2   nwk_ctx3   nwk_ctx4       idle          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0     kahuna          0          0        598          0       1047        136          0          0       3143          0          0          0          0          0          0          0      15077          0          0          0          0    storage          0        598          0          0       4620          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0     exempt          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0       raid          0       1047       4620          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0     target          0        136          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0   netcache          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0 netcache2          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0       cifs          0       3143          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0 wafl_exempt          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0 wafl_xcleaner          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0 sm_exempt          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0    cluster          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0   protocol          0          0          0          0          0          0          0          0          0     
     0          0          0          0          0          0          0          0          0          0          0          0 nwk_exclusive          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0 nwk_exempt          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0 nwk_legacy          0      15077          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0   nwk_ctx1          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0   nwk_ctx2          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0   nwk_ctx3          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0   nwk_ctx4          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0

                       Miscellaneous Statistics   12366581 hard context switches           525658 NFS operations        410 CIFS operations                      0 HTTP operations          0 NetCache URLs                        0 streaming packets    7435651 network KB received            7322367 network KB transmitted   11942136 disk KB read                  10947460 disk KB written    6919387 NVRAM KB written                     0 nolog KB written    1691881 WAFL bufs given to clients           0 checksum cache hits    1691542 no checksum - partial buffer         0 FCP operations          0 iSCSI operations

                       WAFL Statistics       4704 name cache hits                    523 name cache misses   42386334 buf hash hits                 12066605 buf hash misses     697938 inode cache hits                     4 inode cache misses    8263392 buf cache hits                  349463 buf cache misses      83295 blocks read                    1865643 blocks read-ahead     213863 chains read-ahead                33087 dummy reads    1606364 blocks speculative read-ahead   2088468 blocks written      11654 stripes written                      0 blocks over-written          0 wafl_timer generated CP              0 snapshot generated CP          0 wafl_avail_bufs generated CP        76 dirty_blk_cnt generated CP          0 full NV-log generated CP             2 back-to-back CP          0 flush generated CP                   0 sync generated CP          0 wafl_avail_vbufs generated CP         0 deferred back-to-back CP          0 container-indirect-pin CP            0 low mbufs generated CP         15 low datavecs generated CP      1103373 non-restart messages      24297 IOWAIT suspends                  10988 next nvlog nearly full msecs      18253 dirty buffer susp msecs              0 nvlog full susp msecs     391458 buffers

                       RAID Statistics     546668 xors                                 0 long dispatches [0]          0 long consumed [0]                    0 long consumed hipri [0]          0 long low priority [0]                0 long high priority [0]          0 long monitor tics [0]                0 long monitor clears [0]          0 long dispatches [1]                  0 long consumed [1]          0 long consumed hipri [1]              0 long low priority [1]          0 long high priority [1]               0 long monitor tics [1]          0 long monitor clears [1]             18 max batch       7872 blocked mode xor                126415 timed mode xor       1406 fast adjustments                   826 slow adjustments          0 avg batch start                      0 avg stripe/msec      12174 tetrises written                     0 master tetrises          0 slave tetrises                  326597 stripes written     219536 partial stripes                 107061 full stripes    2080066 blocks written                  898151 blocks read       1077 1 blocks per stripe size 9         479 2 blocks per stripe size 9        480 3 blocks per stripe size 9         666 4 blocks per stripe size 9        866 5 blocks per stripe size 9        1482 6 blocks per stripe size 9       3050 7 blocks per stripe size 9       11035 8 blocks per stripe size 9      99177 9 blocks per stripe size 9       24090 1 blocks per stripe size 11      22872 2 blocks per stripe size 11      23535 3 blocks per stripe size 11      23579 4 blocks per stripe size 11      22079 5 blocks per stripe size 11      20151 6 blocks per stripe size 11      18496 7 blocks per stripe size 11      15880 8 blocks per stripe size 11      13439 9 blocks per stripe size 11      11257 10 blocks per stripe size 11      7882 11 blocks per stripe size 11       1898 1 blocks per stripe size 13        805 2 blocks per stripe size 13        647 3 blocks per stripe size 13        469 4 blocks per stripe size 13        307 5 blocks per stripe size 13        265 6 blocks per stripe size 13        224 7 blocks per stripe size 13        184 8 blocks per stripe size 13        115 9 blocks per stripe size 13         68 10 blocks per stripe size 13         32 11 blocks per stripe size 13         9 12 blocks per stripe size 13          2 13 blocks per stripe size 13

                       Network Interface Statistics iface    side      bytes    packets multicasts     errors collisions  pkt drops e0a      recv       6966         84          0          0                     0          xmit       2604         62         62          0          0 e0b      recv    7664611      10846          0          0                     0          xmit    1985894       5769         62          0          0 e0c      recv 7606431912    7715430          0          0                     0          xmit 7496113306    7481754         63          0          0 e0d      recv       3968         62          0          0                     0          xmit       2604         62         62          0          0 vh       recv          0          0          0          0                     0          xmit          0          0          0          0          0 Single   recv    7670159      10925       5272          0                     0          xmit    1988072       5828        124          0          0 vif1     recv 7593873116    7709940        127          0                     0          xmit 7506890328    7483923        125          0          0

                       Disk Statistics
        ut% is the percent of time the disk was busy.
        xfers is the number of data-transfer commands issued.
        xfers = ureads + writes + cpreads + greads + gwrites
        chain is the average number of 4K blocks per command.
        usecs is the average disk round-trip time per 4K block.

disk             ut%  xfers  ureads--chain-usecs writes--chain-usecs cpreads-chain-usecs greads--chain-usecs gwrites-chain-usecs
/aggr0/plex0/rg0:
0b.16             14   8685     201   1.18 23996   7953  14.90  1063    531   4.43  1451      0   ....     .      0   ....     .
0a.32             15   8865     191   1.19 53040   8150  14.59  1125    524   4.25  1533      0   ....     .      0   ....     .
0a.19             50  23117   13787   1.01 35692   8190  13.86  2515   1140   4.10  6521      0   ....     .      0   ....     .
0a.33             49  22447   13408   1.02 35952   7985  14.30  2496   1054   4.25  6622      0   ....     .      0   ....     .
0a.18             49  22746   13608   1.02 35173   8022  14.22  2523   1116   4.14  7054      0   ....     .      0   ....     .
0b.34             48  22387   13333   1.02 34592   8007  14.27  2447   1047   3.93  6935      0   ....     .      0   ....     .
0a.42             48  22375   13301   1.03 34217   7968  14.26  2478   1106   4.28  6021      0   ....     .      0   ....     .
0b.27             49  22686   13619   1.02 35421   7967  14.27  2518   1100   4.35  6251      0   ....     .      0   ....     .
0a.43             49  22394   13207   1.02 35615   8007  14.12  2566   1180   4.72  5970      0   ....     .      0   ....     .
0b.28             49  22583   13445   1.02 34425   8000  14.23  2425   1138   4.04  7097      0   ....     .      0   ....     .
0a.44             48  22269   13081   1.02 34675   7935  14.22  2486   1253   4.77  5796      0   ....     .      0   ....     .
/aggr1/plex0/rg0:
0a.35              4   2067     194   1.00 28278   1120   4.49  3283    753   6.26  1139      0   ....     .      0   ....     .
0a.17              6   2205     186   1.00 66086   1305   4.13  3402    714   6.25  1407      0   ....     .      0   ....     .
0b.36              3   1356     454   1.02 14931    561   2.49  5256    341   4.21  1310      0   ....     .      0   ....     .
0a.20              2    902      88   1.10 23021    417   3.29  6563    397   4.30  1214      0   ....     .      0   ....     .
0b.37              2    853      98   1.00 24582    369   3.26  6017    386   3.81  1496      0   ....     .      0   ....     .
0a.21              2    840      97   1.04 22950    367   3.28  6217    376   4.06  1353      0   ....     .      0   ....     .
0a.22              2    851      87   1.02 14831    376   3.09  6469    388   3.92  1338      0   ....     .      0   ....     .
0a.38              2    825      98   1.00 19224    382   3.37  5568    345   4.47  1086      0   ....     .      0   ....     .
0a.23              2    881     110   1.06 20017    382   2.88  6701    389   4.23  1268      0   ....     .      0   ....     .
0b.39              2    841      92   1.04 20104    380   2.83  7223    369   4.12  1478      0   ....     .      0   ....     .
0b.24              3    831      90   1.00 22011    369   3.20  6405    372   3.96  1322      0   ....     .      0   ....     .
0b.40              2    846      80   1.14 21110    381   3.20  6303    385   4.49  1231      0   ....     .      0   ....     .
0b.25              2    787      85   1.00 21788    335   3.29  6798    367   3.86  1571      0   ....     .      0   ....     .
0b.41              2    863      91   1.04 25411    381   3.28  6665    391   4.13  1353      0   ....     .      0   ....     .
0b.26              2    850      94   1.04 25051    380   3.01  7076    376   4.46  1073      0   ....     .      0   ....     .
/aggr2/plex0/rg0:
0c.50             31  26838     188   1.00 32287  16252  12.51  1221  10398   7.53   869      0   ....     .      0   ....     .
0c.58             33  27042     186   1.00 75973  16469  12.37  1264  10387   7.52  1010      0   ....     .      0   ....     .
0c.57             84  48507   26439   6.39  6247  11314   8.23  4289  10754   6.50  3552      0   ....     .      0   ....     .
0c.56             83  48576   26433   6.43  6231  11342   8.34  4297  10801   6.46  3559      0   ....     .      0   ....     .
0c.55             84  49043   26843   6.37  6276  11402   8.43  4288  10798   6.47  3462      0   ....     .      0   ....     .
0c.53             83  48260   26330   6.43  6173  11211   8.49  4209  10719   6.52  3585      0   ....     .      0   ....     .
0c.52             83  48871   26607   6.38  6192  11558   8.34  4237  10706   6.44  3661      0   ....     .      0   ....     .
0c.51             83  48663   26735   6.39  6166  11150   8.43  4244  10778   6.53  3545      0   ....     .      0   ....     .
0c.59             84  48707   26626   6.42  6191  11290   8.30  4372  10791   6.45  3643      0   ....     .      0   ....     .
0c.49             83  48373   26429   6.40  6145  11214   8.49  4188  10730   6.48  3585      0   ....     .      0   ....     .
0c.48             83  48232   26037   6.41  6188  11477   8.41  4273  10718   6.38  3663      0   ....     .      0   ....     .
0c.54             82  48038   25990   6.28  6357  11208   8.34  4218  10840   6.49  3484      0   ....     .      0   ....     .
0c.60             84  48801   26611   6.34  6303  11297   8.39  4379  10893   6.48  3668      0   ....     .      0   ....     .

Aggregate statistics:
Minimum            2    787      80                 335                 341                   0                   0
Mean              38  21135   10630                6483                4021                   0                   0
Maximum           84  49043   26843               16469               10893                   0                   0

Spares and other disks: 0c.61              0      0       0   ....     .      0   ....     .      0   ....     .      0   ....     .      0   ....     .

Spares and other disks: 0b.29              0      0       0   ....     .      0   ....     .      0   ....     .      0   ....     .      0   ....     .

Spares and other disks: 0b.45              0      0       0   ....     .      0   ....     .      0   ....     .      0   ....     .      0   ....     .

                       FCP Statistics          0 FCP Bytes recv                       0 FCP Bytes sent          0 FCP ops

                       iSCSI Statistics          0 iSCSI Bytes recv                     0 iSCSI Bytes xmit          0 iSCSI ops

                       Interrupt Statistics     738305 Clock (IRQ 0)                       50 Uart (IRQ 4)      84945 PCA Intr (IRQ 11)              3224863 Gigabit Ethernet (IRQ 48)        126 Gigabit Ethernet (IRQ 49)       558064 FCAL (IRQ 52)       2394 Gigabit Ethernet (IRQ 97)         9790 Gigabit Ethernet (IRQ 98)     135882 FCAL (IRQ 101)                   76653 FCAL (IRQ 102)          0 RTC                               4572 IPI    4835644 total

                       NVRAM Statistics    8771809 total dma transfer KB          6856088 wafl write req data KB     222006 dma transactions               1129392 dma destriptors    5243328 waitdone preempts               956614 waitdone delays          0 transactions not queued         222006 transactions queued     222006 transactions done                39766 total waittime (MS)     269491 completion wakeups              257624 nvdma completion wakeups     140203 nvdma completion waitdone      6920256 total nvlog KB          0 nvlog shadow header array full         0 channel1 dma transfer KB          0 channel1 dma transactions            0 channel1 dma descriptors

                       NFS Detail Statistics

Server rpc: TCP: calls       badcalls    nullrecv    badlen      xdrcall 525693      0           0           0           0

UDP: calls       badcalls    nullrecv    badlen      xdrcall 0           0           0           0           0

IPv4: calls       badcalls    nullrecv    badlen      xdrcall 525693      0           0           0           0

IPv6: calls       badcalls    nullrecv    badlen      xdrcall 0           0           0           0           0

Server nfs: calls       badcalls 525659      0

Server nfs V2: (0 calls) null       getattr    setattr    root       lookup     readlink   read 0 0%       0 0%       0 0%       0 0%       0 0%       0 0%       0 0% wrcache    write      create     remove     rename     link       symlink 0 0%       0 0%       0 0%       0 0%       0 0%       0 0%       0 0% mkdir      rmdir      readdir    statfs 0 0%       0 0%       0 0%       0 0%

Read request stats (version 2) 0-511      512-1023   1K-2047    2K-4095    4K-8191    8K-16383   16K-32767  32K-65535  64K-131071 > 131071 0          0          0          0          0          0          0          0          0          0 Write request stats (version 2) 0-511      512-1023   1K-2047    2K-4095    4K-8191    8K-16383   16K-32767  32K-65535  64K-131071 > 131071 49         76         29639      71491      130643     0          0          0          0          0

Server nfs V3: (525659 calls) null       getattr    setattr    lookup     access     readlink   read 0 0%       50168 10%  145 0%     2509 0%    32848 6%   0 0%       218476 42% write      create     mkdir      symlink    mknod      remove     rmdir 221222 42% 4 0%       0 0%       0 0%       0 0%       0 0%       0 0% rename     link       readdir    readdir+   fsstat     fsinfo     pathconf 0 0%       0 0%       0 0%       0 0%       287 0%     0 0%       0 0% commit 0 0%

Read request stats (version 3) 0-511      512-1023   1K-2047    2K-4095    4K-8191    8K-16383   16K-32767  32K-65535  64K-131071 > 131071 11635      1638       18372      11719      51868171   3294236    4712579    2537897507 2286       0 Write request stats (version 3) 0-511      512-1023   1K-2047    2K-4095    4K-8191    8K-16383   16K-32767  32K-65535  64K-131071 > 131071 331807     5005226    36121655   17556380   7282066    9080609    52320807   1264075760 5054       0

Misaligned Read request stats BIN-0    BIN-1    BIN-2    BIN-3    BIN-4    BIN-5    BIN-6    BIN-7 2597516934 0        0        0        0        0        0        0 Misaligned Write request stats BIN-0    BIN-1    BIN-2    BIN-3    BIN-4    BIN-5    BIN-6    BIN-7 1289760205 204268   228737   206016   206647   203642   209687   208103

NFS V2 non-blocking request statistics: null       getattr    setattr    root       lookup     readlink   read 0 0%       0 0%       0 0%       0 0%       0 0%       0 0%       0 0% wrcache    write      create     remove     rename     link       symlink 0 0%       0 0%       0 0%       0 0%       0 0%       0 0%       0 0% mkdir      rmdir      readdir    statfs 0 0%       0 0%       0 0%       0 0%

NFS V3 non-blocking request statistics: null       getattr    setattr    lookup     access     readlink   read 0 0%       0 0%       0 0%       0 0%       0 0%       0 0%       0 0% write      create     mkdir      symlink    mknod      remove     rmdir 0 0%       0 0%       0 0%       0 0%       0 0%       0 0%       0 0% rename     link       readdir    readdir+   fsstat     fsinfo     pathconf 0 0%       0 0%       0 0%       0 0%       0 0%       0 0%       0 0%

NFS reply cache statistics: TCP: InProg hits     Misses          Cache hits      False hits 0               221371          21              2 UDP: In progress     Misses          Cache hits      False hits 0               0               0               0 filer001*>

About 360GB if you have 5% aggr reserve.
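As a rough sketch of where that number comes from, using the right-sized capacity of the 500GB spares from the sysconfig -r output above and assuming the default 10% WAFL reserve plus a 5% aggregate Snapshot reserve:

    right-sized 500GB drive:             423111 MB
    less 10% WAFL reserve:               423111 x 0.90 ≈ 380800 MB
    less 5% aggregate Snapshot reserve:  380800 x 0.95 ≈ 361760 MB  (roughly 360 GB usable)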

Your disk utilization is really low. At least during this sample.

How can I see from the statit output that the disk utilization is really low?
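(For reference, the legend in the statit output above notes that "ut% is the percent of time the disk was busy"; reading that column from the Disk Statistics section gives, roughly:)

    /aggr1/plex0/rg0 data disks:  about 2-6% busy
    /aggr0/plex0/rg0 data disks:  about 48-50% busy
    /aggr2/plex0/rg0 data disks:  about 82-84% busy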

If I need to add this disk: "spare           0b.45   0b    2   13  FC:B   -  ATA   7200 423111/866531584  423889/868126304"

the command should be this one, right?

aggr add aggr1 -d 0b.45

Correct. Not ideal to add 1 disk but that will add it.
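A quick sketch of how to confirm the add afterwards (aggr1 and 0b.45 are from the command above):

    filer001> disk zero spares        optional: zero the spare ahead of time so the add completes faster
    filer001> aggr add aggr1 -d 0b.45
    filer001> aggr status -r aggr1    the new disk should now show up as a data disk in rg0
    filer001> df -A aggr1             aggregate capacity should reflect the added disk
    filer001> aggr status -s          0b.45 should no longer appear in the spare list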

VKALVEMULA

I would say that when you add a single disk to an existing aggr, do it over a weekend or when you have a large change window, so that you have enough time to run the reallocation.

If I were in your place, I would migrate the volumes that are eating up space in aggr1, one by one, to a new aggregate or to another aggregate with low usage.

Think about it this way: if you are adding a 300GB or 450GB disk to aggr1, think about how many days it will take to fill up that 300GB of space (maybe sooner than you expect, and then you need to add a few more disks again). So, as a best practice, if you are adding disks to an existing aggregate, add them in a batch; otherwise, create a new aggregate with 'n' disks so that you won't hit performance issues in the future.
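If you do go the migration route Vijay suggests, one common approach on a 7-mode system like this is vol copy; a minimal sketch (the volume names and size are hypothetical):

    filer001> vol create newvol aggr2 500g            create the destination volume on the target aggregate
    filer001> vol restrict newvol                     the vol copy destination must be restricted
    filer001> vol copy start /vol/oldvol /vol/newvol
    filer001> vol copy status                         monitor the copy
    filer001> vol online newvol                       bring the copy online once it completes

Clients still have to be re-pointed (or the volumes renamed) at cutover, so it fits best in the same change window mentioned above.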

Thanks Vijay. We did not push through with this activity; instead we migrated the data to another volume.

kkevin

I am fairly certain at this point that you are running out of disk I/O capacity on aggr1 and aggr2 (you need more spindles).

This would be showing up as increased latency, which I believe you are experiencing.

With that said, I agree with Scott and Vijay: you can add the disk, just reallocate at the volume level afterwards.
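A minimal sketch of that volume-level reallocation (the volume name is an example; note Scott's earlier caveats about dedupe and compression):

    filer001> reallocate on                      enable reallocation scans on the controller
    filer001> reallocate measure /vol/myvol      check how far out of balance the volume layout is
    filer001> reallocate start -f /vol/myvol     one-time full reallocation to spread existing data across the new disk
    filer001> reallocate status /vol/myvol       monitor progress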

Thanks all. We did not push through with this activity; instead we migrated the data to another volume.


Reassign disks to nodes with System Manager - ONTAP 9.7 and earlier


You can use ONTAP System Manager classic (available in ONTAP 9.7 and earlier) to reassign the ownership of spare disks from one node to another node to increase the capacity of an aggregate or storage pool.

You can reassign disks if the following conditions are true:

The container type of the selected disks must be “spare” or “shared”.

The disks must be connected to nodes in an HA configuration.

The disks must be visible to the node.

You cannot reassign a disk if the following conditions are true:

The container type of the selected disk is “shared”, and the data partition is not spare.

The disk is associated with a storage pool.

You cannot reassign the data partition of shared disks if storage failover is not enabled on the nodes that are associated with the shared disks.

For partitioned disks, you can reassign only the data partition of the disks.

For MetroCluster configurations, you cannot use System Manager to reassign disks.

You must use the command-line interface to reassign disks for MetroCluster configurations.

1. Click Storage > Aggregates & Disks > Disks.

2. In the Disks window, select the Inventory tab.

3. Select the disks that you want to reassign, and then click Assign.

4. In the Warning dialog box, click Continue.

5. In the Assign Disks dialog box, select the node to which you want to reassign the disks.

6. Click Assign.

Remove disk ownership using the ONTAP CLI (ONTAP 9.3 and later)

Assign disks automatically using the ONTAP CLI (ONTAP 9.3 and later)

Manually assign disks using the ONTAP CLI (ONTAP 9.3 and later)
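For reference, a rough CLI equivalent on ONTAP 9.3 and later looks like the following (the disk and node names are examples, not from a real system):

    cluster1::> storage disk show -container-type spare        list spare disks and their current owners
    cluster1::> storage disk removeowner -disk 1.0.23           release ownership of the spare
    cluster1::> storage disk assign -disk 1.0.23 -owner node2   assign it to the other node in the HA pair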


Expanding NetApp volume via spare disks

I am somewhat new to NetApp, so please bear with me. I have a NetApp appliance that I was told has spare disks. How do I:

  • Determine how many spare disks I have available? I have tried "disk show", but it doesn't have a column for "spare". All disks listed are a member of "Pool0". It doesn't show which disks are members of an aggregate either.
  • Add a spare disk to an aggregate. (If this is what needs to be done in order to give an aggregate more free space)

I did figure out how to grow a volume with aggregate free space, so I don't need any assistance there.

Apologies if I am going about this the wrong way or if I used terminology incorrectly.


  • sysconfig -r will also show you spares. –  Sobrique Sep 17, 2014 at 15:20

netapp> aggr status -s          (to view the spare disks in the system)

netapp> aggr status -f          (to view the failed disks in the system)

netapp> aggr add aggr0 xx.yy    (to add disk xx.yy to aggr0; pick the disk name from the output of "aggr status -s")
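If more than one spare needs to be added, aggr add also accepts several disk names or a disk count; a small sketch (the aggregate and disk names are placeholders):

    netapp> aggr status -s                    list the available spares
    netapp> aggr add aggr0 -d xx.yy zz.ww     add two specific spares to aggr0
    netapp> aggr add aggr0 2                  or let Data ONTAP pick 2 spares of a matching size
    netapp> df -A aggr0                       confirm the new aggregate capacity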


  • When I do "aggr status -s" I see "Spare disks for block checksum". Does that indicate that these disks are in use in any way? (I have 13 spare disks currently) –  cat pants Sep 18, 2014 at 19:10
  • No, it means that those spares can be used on aggregates made of disks using block checksum. Generally, that's all of them. –  Basil Oct 11, 2014 at 16:26

You must log in to answer this question.

Not the answer you're looking for browse other questions tagged storage netapp ..

  • The Overflow Blog
  • Reshaping the future of API platforms
  • Featured on Meta
  • Our Partnership with OpenAI
  • Imgur image URL migration: Coming soon to a Stack Exchange site near you!

Hot Network Questions

  • If the Earth stopped spinning, what's the ideal point for it to stop to ensure the most people survive?
  • Has there ever been a transfer of occupants from one aircraft to another while airborne?
  • Can individual rings of Saturn be considered satellites?
  • What Does the 'Cosɸ' Rating on a DC Relay Indicate?
  • Reference or proof of a theorem of L. Fejér on summability of Fourier series
  • Looked at a different rolling (3d6, 3d6, average) for D&D characters in AnyDice and the result didn't come out as expected. What am I missing?
  • What residential appliance has a NEMA 5-20 plug?
  • Why is the empty set described as "unique" when it is a subset of every set?
  • Is updating a macro value in Xcode preprocessors marcos violating open closed principle?
  • *Trivial* near-repdigit perfect powers
  • What is "the sin that no Christian need pardon"?
  • Can a wizard escape a cage by casting mirror image?
  • code format and steps web scraping using beautiful soup
  • Running two dryers same circuit, controlled by switch
  • Why is there no established prayer for having (or raising) children?
  • Using Mars inner moon Phobos as a brake
  • How does Cloak of Displacement interact with mounted combat?
  • Does Windows 10's portability limit OS features?
  • Can I use two prepositions with the same noun when one takes the dative and the other the accusative?
  • Can I replace max function with mathematical expression?
  • On the definition of stably almost complex manifold
  • Where did Lagrange prove the Four Squares Theorem?
  • In “As an organization we…” is WE considered a personal pronoun?
  • How does Russia exactly define Russian territory in its state policy?

netapp assign spare disk to aggregate


Cisco UCS Director Task Library Reference, Release 6.9


Chapter: NetApp ONTAP Tasks

This chapter contains the following sections: abort netapp snapvault, add disk to netapp 7-mode aggregate, add existing initiator to netapp 7-mode igroup, add ip address to netapp 7-mode vfiler, add license to netapp 7-mode controller, add netapp 7-mode nfs export, add netapp 7-mode qtree nfs export, add netapp cifs volume share, add netapp initiator to initiator group, add netapp vfiler initiator to initiator group, add netapp vfiler nfs volume export, add quota to netapp 7-mode volume, add storage to netapp vfiler, assign vlan to netapp ip space, associate netapp 7-mode volume as vmware nfs datastore, associate netapp vfiler volume as nfs datastore, clone netapp lun, configure netapp snapmirror, configure netapp vlan interface, create netapp aggregate, create netapp flexible volume, create netapp ip space, create netapp initiator group, create netapp lun, create netapp qtree, create netapp snapmirror schedule, create netapp snapvault, create netapp volume snapshot, create netapp vfiler initiator group, create netapp vfiler lun, create netapp vfiler setup, create netapp vlan interface, create vfiler using netapp ontap, delete netapp aggregate, delete netapp ip space, delete netapp initiator group, delete netapp snapmirror schedule, delete netapp snapvault, delete netapp vfiler initiator group, delete quota, delete vlan interface, destroy netapp flexible volume, destroy netapp lun, destroy netapp qtree, destroy netapp vfiler lun, destroy netapp vfiler using ontap, execute netapp cli, get netapp partner info, map lun to netapp initiator group, map netapp vfiler lun to initiator group, modify netapp snapvault, modify netapp volume status, move netapp lun, netapp snapmirror destination actions, persist netapp network configuration, release netapp snapvault, remove ip address from netapp vfiler, remove netapp cifs volume share, remove netapp initiator from initiator group, remove netapp qtree nfs export, remove netapp volume nfs export, remove netapp vfiler initiator from initiator group, remove netapp vfiler nfs volume export, remove storage from netapp vfiler, resize netapp flexible volume, resize netapp lun, resize netapp vfiler lun, resize netapp vfiler volume, resize vm datastore(netapp), restore netapp snapvault, set netapp cifs volume share access, setup netapp cifs on vfiler, unmap netapp lun from initiator group, unmap netapp vfiler lun from initiator group, and update netapp snapvault.


netapp assign spare disk to aggregate


NetApp Knowledge Base

How to move assignment of spare disks from HA or DR partner node

  • Last updated
  • Save as PDF
  • FAS Systems
  • AFF Systems
  • MetroCluster
  • ONTAP Select

Description

This article explains how to move the assignment (ownership) of spare disks, which are owned by individual nodes in a High Availability (HA) pair or MetroCluster configuration, from either:

  • One node in an HA pair to the partner node
  • One node in a MetroCluster to either the HA node's local pool (pool0) or the DR/AUX node's remote pool (pool1)
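
A minimal clustered ONTAP sketch of the HA-partner case, assuming the spare is disk 1.0.11, its current owner is cluster1-01, and the partner is cluster1-02 (all names are illustrative); automatic ownership assignment may need to be suspended first so the original node does not immediately reclaim the disk:

cluster1::> storage disk option modify -node cluster1-01 -autoassign off (optionally suspend auto-assignment)
cluster1::> storage disk removeowner -disk 1.0.11 (release ownership from the current node)
cluster1::> storage disk assign -disk 1.0.11 -owner cluster1-02 (assign the spare to the partner node)
cluster1::> storage aggregate show-spare-disks -original-owner cluster1-02 (verify the disk now appears as a spare on the partner)

For the MetroCluster case, the assign step would also have to target the correct pool (the -pool parameter of storage disk assign), so treat this only as a sketch of the general flow.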


COMMENTS

  1. Add a disk as a spare to an aggregate

    As for the RAID size value: yes, you can change it if you want, but ONTAP will not pull a spare disk into the aggregate on its own; only an administrator will do that. In other words, ONTAP will not grow the aggregate just because it has spares and the RAID group size is larger than the number of allocated drives. The disk.auto_assign option only assigns ownership of an unowned disk to the controller.

  2. Manually assign disk ownership

    Disks must be owned by a node before they can be used in a local tier (aggregate). If your cluster is not configured to use automatic disk ownership assignme...
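
    A minimal clustered ONTAP sketch of a manual assignment, assuming the unowned disk is 1.0.5 and the target node is cluster1-01 (both illustrative):

    cluster1::> storage disk show -container-type unassigned (list disks that are not yet owned by any node)
    cluster1::> storage disk assign -disk 1.0.5 -owner cluster1-01 (manually assign ownership so the disk becomes a usable spare)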

  3. how to assign a spare disk to an aggregate

    how to assign a spare disk to an aggregate MohamedShehata 2021-02-13 08:43 AM. Hello, I am using a 7-Mode filer running NetApp Release 8.2.5 7-Mode; the model is a FAS2552. I need to move a disk from the spare pool to an aggregate ...

  4. storage aggregate add-disks

    Description. The storage aggregate add-disks command adds disks to an existing aggregate. You must specify the number of disks or provide a list of disks to be added. If you specify the number of disks without providing a list of disks, the system selects the disks.
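
    A minimal clustered ONTAP sketch of both forms, assuming an aggregate named aggr1 (the aggregate name and disk IDs are illustrative):

    cluster1::> storage aggregate add-disks -aggregate aggr1 -diskcount 4 (let ONTAP select four suitable spares)
    cluster1::> storage aggregate add-disks -aggregate aggr1 -disklist 1.0.20,1.0.21 (add specific spares by disk ID)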

  5. Add spare disks to an aggregate

    Then, aggr add aggr0 -d 0a.00.5. sysconfig -r should show no spares, and another data drive in the aggregate, and you should see your additional space. Warning - if a drive fails, you don't have a spare and the system can't rebuild, so you'll want to watch this. If you lose two drives, the system enters a degraded state.

  6. Adding Spare disk to existing aggregate

    If you want to add a single disk, you will need to increase the size of the RAID group; I'm assuming it is currently 11. If you increase it to 12, you can add the single disk to one of the existing RAID groups. The disk space automatically becomes part of the available aggregate size after adding. I need some pointers on adding a spare disk ...
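
    A minimal Data ONTAP 7-Mode sketch of that sequence, assuming the aggregate is aggr1 with a current raidsize of 11 and the spare is 0a.00.12 (names are illustrative):

    netapp> aggr options aggr1 raidsize 12 (allow the existing RAID group to hold one more disk)
    netapp> aggr add aggr1 -d 0a.00.12 (add the single spare into the expanded RAID group)
    netapp> sysconfig -r (verify the new RAID group layout and remaining spares)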

  7. How to unpartition disks and set to spare for use in a new aggregate


  8. What are the best practices for adding disks to an existing aggregate

    For best performance, it is advisable to add a new RAID group of equal size to existing RAID groups. If a new RAID group cannot be added, then at a minimum, three or more disks should be added at the same time to an existing RAID group. This allows the storage system to write new data across multiple disks. A forced reallocate must be done to ...
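
    For the reallocate step the snippet mentions, a 7-Mode sketch, assuming a volume named vol1 on the grown aggregate (the volume name is illustrative):

    netapp> reallocate start -f -p /vol/vol1 (force a one-time physical reallocation of the volume across the new disks)
    netapp> reallocate status /vol/vol1 (check progress of the reallocation scan)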

  9. How to add disks to aggregate in ONTAP System ...


  10. Solved: spare disk for aggregates

    Let's say we have an aggregate (aggr0) with ADP and an aggregate (aggr1) without ADP on the same node of a cluster: can we assign just one spare for both aggr0 and aggr1, or do we need a separate spare for each aggregate? Back in the days with 7-Mode in 8.x we had a global spare, and any aggregate with a failed disk would pick up the spare and use it.

  11. Manage aggregates

    In the working environment, click the Aggregates tab. On the Aggregates tab, navigate to the desired title and then click the … (ellipsis icon). Manage your aggregates: Task. Action. View information about an aggregate. Under the … (ellipsis icon) menu, click View aggregate details. Create a volume on a specific aggregate.

  12. Add disks to aggregate

    1. Select the working environment to use. Perform the workflow Get working environments and choose the publicId value for the workingEnvironmentId path parameter. 2. Select the aggregate. Perform the workflow Get aggregates and choose the name value for the aggregateName path parameter. 3. Add the disks. HTTP method.

  13. How to add disks to an existing data aggregate ...

    This article describes the process to grow aggregates with newly added disks into a cluster using root-data1-data2 partitions. This assumes the disk partitions will be added into an existing raidgroup. If a new raidgroup is needed, see How to add partitioned disks to a new raidgroup.
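
    A minimal clustered ONTAP sketch for that case, assuming the aggregate is data_aggr1 and the newly added, already-partitioned disks are 1.0.22 and 1.0.23 (illustrative names; the article's full procedure may include additional ownership and partitioning steps):

    cluster1::> storage aggregate show-spare-disks -original-owner cluster1-01 (confirm the new data partitions appear as spares)
    cluster1::> storage aggregate add-disks -aggregate data_aggr1 -disklist 1.0.22,1.0.23 (grow the existing raidgroup with the partitioned disks)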

  14. Attach storage shelves and reassign disk ownership

    Verify the power supply and physical connectivity of the shelves. From the node3 LOADER prompt, boot to Maintenance mode: boot_ontap maint. Display the system ID of node3 with disk show -v:

    *> disk show -v
    Local System ID: 101268854
    ...

    Record the system ID of node3 for use in Step 4 below.

  15. ONTAP 9 assign a disk as a spare

    If it is a used disk that has partitions already, try to assign all partitions:

    storage disk assign -disk <disk id> -owner nodename -force
    storage disk assign -disk <disk id> -owner nodename -root true -force
    storage disk assign -disk <disk id> -owner nodename -data true -force

    Then remove the foreign aggregate if applicable.

  16. add disk spare to aggregate

    It is possible but could cause a slowdown in performance depending on how many disks are in the aggregate... You need at least 1 spare (2 is better) so adding 1 or 2 drives isn't typical best practice. We prefer to add a complete raid group at a time to an aggregate when growing it...

  17. Aggregate creation fails with message "is not a spare disk"

    Within the ONTAP CLI, the node appears to have spare disks available:

    cluster1::> storage aggregate show-spare-disks

    Original Owner: cluster1-01
     Pool0
      Root-Data Partitioned Spares
      (columns: Disk, Type, Class, RPM, Checksum, Local Data Usable, Local Root Usable, Physical Size, Status)

  18. Reassign disks to nodes with System Manager

    Steps. Click Storage > Aggregates & Disks > Disks. In the Disks window, select the Inventory tab. Select the disks that you want to reassign, and then click Assign. In the Warning dialog box, click Continue. In the Assign Disks dialog box, select the node to which you want to reassign the disks. Click Assign.

  19. How to add disks to a new partitioned raidgroup in a partitioned aggregate

    Description. When adding non-partitioned spare disks to a partitioned aggregate as a new RAID group, the non-partitioned spares are not automatically partitioned as expected in the new RAID group. Follow the steps provided to partition one non-partitioned spare first; this will allow you to add the remaining non-partitioned spares.

  20. Expanding NetApp volume via spare disks

    Add a spare disk to an aggregate. (If this is what needs to be done in order to give an aggregate more free space) ... netapp> aggr status -s To view spares disks in the system. netapp> aggr status -f To view failed disks in the system. netapp> aggr add aggr0 xx.yy To add disk xx.yy to aggregate0 - look for output the command aggr status -s ...

  21. Auto-provisioning aggregate fails due to not enough spare disks with

    remaining spare disks and partitions after aggregate creation: ...

  22. Cisco UCS Director Task Library Reference, Release 6.9

    Add Disk to NetApp 7-Mode Aggregate. Summary: Assign disk(s) to the aggregate. Description: The available spare disk(s) can be assigned to an aggregate. Inputs (Input, Description, Mappable To, Type): ... Select disks to be aggregated. netapp Spare Disk List:

  23. How to move assignment of spare disks from HA ...

    Description. Spare disks are owned by individual nodes in a High Availability (HA) pair or MetroCluster configuration. This article explains how to move assignment or ownership of the spare disks from either: One node in a HA pair to the partner node. One node in a MetroCluster to either the HA node's local pool (pool0) or the DR/AUX node's ...