SATA supports three link power management states (Slumber, Partial, and the active PHY Ready state), and these are tunable via the Linux /sys file system. The specification requires that a link in the Partial state resume to the active state within about 10 us, and a link in Slumber within about 10 ms. If you add software overhead, the actual time to bring a drive from Slumber back to the active state is much higher. But the power savings are considerable, and it is worth power-managing the drives.
Linux Aggressive Link Power Management (ALPM) is a power-saving technique that helps the disk save power by putting the SATA link to the disk into a low-power state during idle time. ALPM automatically sets the SATA link back to an active power state once I/O requests are queued to that link.
For example, the user can set "/sys/class/scsi_host/host*/link_power_management_policy" to min_power, medium_power, or max_performance. These policies correspond to the Slumber, Partial, and active (PHY Ready) link states respectively.
SATA PHYs are based on CMOS digital logic, which uses almost no power when static. However, the logic consumes power whenever gates switch: dynamic power scales roughly as P ≈ α·C·V²·f, so Higher Speed = More state transitions = Higher Power. The following state diagram describes the SATA power state transitions in detail.
Thursday, June 23, 2011
Tuesday, June 21, 2011
Linux library "libsas.so"
The underlying storage protocols have changed vastly since the original Linux implementation. During the early '90s, SCSI and ATA were the primary mass storage protocols. As Linux evolved to catch up with modern storage interfaces like SAS, SATA, and FCoE, some design changes were made in the storage stack. We will briefly discuss "libsas", which is shared by the Low Level Drivers (LLDs) and the SCSI storage stack. Why do we need the libsas interface to begin with?
1) The SAS interface is dynamic, and devices can appear and disappear over time. From the user/kernel perspective, device nodes need to be created or deleted dynamically, such as mass storage nodes /dev/sdX or generic nodes such as /dev/sgX, which are needed to access SCSI Enclosure Services (SES) devices.
2) The SAS physical layer is similar to an Ethernet interface in that various physical layer statistics (similar to netstat) can be exported to kernel/user space.
3) In the early days of SAS 1.0, expander routing tables needed to be configured by the external HBA/initiator. This has changed since SAS 2.0, as expanders are now self-configuring.
The above tasks can be implemented by each LLD in a proprietary way (some HBAs do this), or the designer of the HBA LLD can use the common SAS library to handle the SAS dynamic events outlined above.
There are many good reference texts which outline the Linux SCSI storage subsystem in great detail. For the sake of completeness, I've drawn a block diagram which depicts the Linux storage stack.
When a new HDD or expander is added to the SAS fabric, the device broadcasts certain packets or primitives. These are received by the HBA LLD, which needs to take appropriate action based on their contents. For example, if an HDD is hot-plugged it will broadcast an IDENTIFY frame identifying itself as a hard disk. When such a frame is received by the LLD, a device node such as /dev/sdX needs to be created on the file system. So how is this information passed to the kernel? The following struct (from libsas.h) contains various fields which are populated by the LLD, with the exception of the three function pointers identified below:
struct sas_ha_struct {
..
..
void (*notify_ha_event)(struct sas_ha_struct *, enum ha_event);
void (*notify_port_event)(struct sas_phy *, enum port_event);
void (*notify_phy_event)(struct sas_phy *, enum phy_event);
struct asd_sas_phy **sas_phy; // <--- frame_rcvd and sas_prim
..
};
When an LLD registers with the kernel using "sas_register_ha(struct sas_ha_struct *)", the above function pointers become valid after successful registration.
The LLD can call the (*notify_*)() function pointers so that the SAS library processes the actual events/information received from the SAS fabric. For example, if a SAS PHY goes down or is broken, the LLD calls (*notify_phy_event)() with the appropriate phy event. The SAS library then tears down the link and removes the corresponding device node in /dev to reflect the state of the SAS fabric.