That might just be a bug in the "aggr status -r" output; I'm not sure and would have to research it further.  However, "sysconfig -r" shows rg1 correctly.

    RAID group /aggr0/plex0/rg1 (double degraded, block checksums)

      RAID Disk    Device    HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      ---------    ------    ------------- ---- ---- ---- ----- --------------    --------------
      dparity    FAILED        N/A                        272000/ -
      parity    FAILED        N/A                        272000/ -
      data        0a.33    0a    2   1   FC:A   0  FCAL 15000 272000/557056000  274845/562884296
      data        0a.26    0a    1   10  FC:A   0  FCAL 15000 272000/557056000  274845/562884296

Anyway, comparing the last ASUP (AutoSupport) to the output from maintenance mode, it looks like the last disk to fail was 0a.44.  RAID-DP can only survive two failed disks per RAID group, so when that third disk failed it took the aggr offline.

LAST ASUP:
Aggregate aggr0 (online, raid_dp, degraded) (block checksums)
  Plex /aggr0/plex0 (online, normal, active)
    RAID group /aggr0/plex0/rg0 (double degraded, block checksums)

      RAID Disk    Device    HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      ---------    ------    ------------- ---- ---- ---- ----- --------------    --------------
      dparity     0a.44    0a    2   12  FC:A   0  FCAL 15000 272000/557056000  274845/562884296
      parity      0a.48    0a    3   0   FC:A   0  FCAL 15000 272000/557056000  280104/573653840
      data        0a.49    0a    3   1   FC:A   0  FCAL 15000 272000/557056000  280104/573653840
      data        0a.16    0a    1   0   FC:A   0  FCAL 15000 272000/557056000  274845/562884296

MAINT MODE:

      RAID Disk Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      --------- ------  ------------- ---- ---- ---- ----- --------------    --------------
      dparity   FAILED          N/A                        272000/ -
      parity    0a.48   0a    3   0   FC:A   0  FCAL 15000 272000/557056000  280104/573653840
      data      0a.49   0a    3   1   FC:A   0  FCAL 15000 272000/557056000  280104/573653840
      data      0a.16   0a    1   0   FC:A   0  FCAL 15000 272000/557056000  274845/562884296

IF, and that's a big IF, you can unfail disk 0a.44, you might be able to get the aggr back online.  Once the aggr is online and the controller boots up, you're gonna want some spares in there so reconstructs can start.  I would expect disk 0a.44 to fail again at some point in the near future.  Hopefully it stays online long enough for the reconstructs in rg0 to finish.  Otherwise, you're looking at another panic and the controller going down again.
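If you do attempt it, the sequence from maintenance mode would look roughly like this.  This is just a sketch based on 7-Mode behavior as I remember it; verify against the disk man page on your system before running anything, since "disk unfail" with the -s flag spares the disk instead of returning it to its raid group, which is not what you want here:

    *> disk unfail 0a.44          (return the failed disk to its raid group)
    *> aggr status -r aggr0       (confirm rg0 no longer shows three failed disks)
    *> halt                       (then boot normally and watch for reconstructs)

If the aggr comes back degraded but online, that's the win condition; the reconstructs onto spares are what actually get you out of the woods.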

What's your spare situation on the partner controller?  Can you assign a few to this controller?
Also, do you have backups?  (just in case)
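If the partner does have spares and you're on software disk ownership, reassigning one is something like the following.  The disk name 0b.43 here is purely hypothetical, and releasing ownership needs advanced privilege, so double-check with "disk show -n" which disks are actually unowned/spare before assigning:

    partner> priv set advanced
    partner*> disk assign 0b.43 -s unowned -f     (release the spare from the partner)
    partner*> priv set
    yournode> disk assign 0b.43                   (claim it on the degraded controller)

Once it shows up as a spare on this controller, reconstruction into rg0 should kick off on its own.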