2. Mike Tutkowski
• Lead Open Source Developer @SolidFire
• Dedicated to CloudStack Development
McClain Buggle
• Strategic Alliance Manager @SolidFire
Who are these guys?
4. Cloud is happening NOW
Pain points are REAL
LACK of viable alternatives
Opportunity is MASSIVE
AWS is not standing still
Why the Urgency Around CloudStack?
9. x86 Virtualization
This movie
ended well…
x86 Virtualization – From Test/Dev to Production
[Chart: x86 Virtualization -- VMware License Revenue ($M), 2002-2011, scale $0-$2,000M]
10. Cloud Computing
This ending is still being written
[Chart: The Test/Dev Era vs. The Production Era]
The Production Era Opportunity: x86 Virtualization vs. Cloud
11. How do we influence the outcome?
[Chart: Margin (Low/Med/High) vs. IOPS (Low to High), with Performance Sensitivity from $ to $$$ -- Applications: Performance Sensitive Apps, CRM / ERP / Database, Messaging / Productivity, Desktop, Dev / Test, Backup / Archive]
Key Cloud Infrastructure Innovations
• Availability
• Performance
• Quality-of-Service
• Scalability
• Automation
12. • Storage is a major pain-point in most early-cloud deployments
• Unpredictable Performance
• Not designed for Multi-tenancy
• Storage is a key underpinning of successful application deployments
• Today = Backup/Archive, Dev/Test
• Tomorrow = Mission & Business Critical Applications
What does this have to do with CloudStack?
15. CloudStack was not designed for dynamic provisioning and does not leverage vendor-unique
storage features within the framework.
For SolidFire, we are interested in features that allow users to select minimum,
maximum, and burst IOPS for a given volume.
Use Cases for a CloudStack Plug-In
16. Ability to defer the creation of a volume until the moment the end user elects to execute
a Compute or Disk Offering.
Still have CS Admin configure the Primary Storage, but now it is based on a plug-in
instead of on a pre-existing storage volume.
No requirement on the part of the CSP to write orchestration logic.
My Specific Needs from the Plug-in
17. A CloudStack storage plug-in is divided into three components:
Provider: Logic related to the plug-in in general (ex: name of plug-in).
Life Cycle: Logic related to life cycle (ex: creation) of a given storage system (ex: a single SolidFire
SAN).
Driver: Logic related to creating and deleting volumes on the storage system.
Must add a dependency in the client/pom.xml file as such:
<dependency>
<groupId>org.apache.cloudstack</groupId>
<artifactId>cloud-plugin-storage-volume-solidfire</artifactId>
<version>${project.version}</version>
</dependency>
So…how do you actually make a plug-in?
18. Must implement the PrimaryDataStoreProvider interface.
Provides CloudStack with the plug-in's name as well as the Life Cycle and Driver
objects the storage system uses.
Must be listed in the applicationContext.xml.in file (Spring Framework related).
A single instance of this class is created for CloudStack.
Provider – About
19. public interface PrimaryDataStoreProvider extends DataStoreProvider {
}
public interface DataStoreProvider {
public static enum DataStoreProviderType {
PRIMARY,
IMAGE
}
public DataStoreLifeCycle getDataStoreLifeCycle();
public DataStoreDriver getDataStoreDriver();
public HypervisorHostListener getHostListener();
public String getName();
public boolean configure(Map<String, Object> params);
public Set<DataStoreProviderType> getTypes();
}
Provider – Interface
20. public class SolidfirePrimaryDataStoreProvider implements PrimaryDataStoreProvider {
private final String providerName = "SolidFire";
protected PrimaryDataStoreDriver driver;
protected HypervisorHostListener listener;
protected DataStoreLifeCycle lifecycle;
@Override
public String getName() { return providerName; }
@Override
public DataStoreLifeCycle getDataStoreLifeCycle() { return lifecycle; }
@Override
public DataStoreDriver getDataStoreDriver() { return driver; }
@Override
public HypervisorHostListener getHostListener() { return listener; }
@Override
public boolean configure(Map<String, Object> params) {
lifecycle = ComponentContext.inject(SolidFirePrimaryDataStoreLifeCycle.class);
driver = ComponentContext.inject(SolidfirePrimaryDataStoreDriver.class);
listener = ComponentContext.inject(DefaultHostListener.class);
return true;
}
}
Provider – Implementation
21. Notes:
client/tomcatconf/applicationContext.xml.in
Each provider adds a single line.
“id” is only used by Spring Framework (not by CS Management Server). Recommend just providing a descriptive
name.
Example:
<bean id="ClassicalPrimaryDataStoreProvider"
class="org.apache.cloudstack.storage.datastore.provider.CloudStackPrimaryDataStoreProviderImpl" />
<bean id="solidFireDataStoreProvider"
class="org.apache.cloudstack.storage.datastore.provider.SolidfirePrimaryDataStoreProvider" />
Provider – Configuration
22. Must implement the PrimaryDataStoreLifeCycle interface.
Handles the creation, deletion, etc. of a storage system (ex: SAN) in CloudStack.
The initialize method of the Life Cycle object adds a row into the cloud.storage_pool
table to represent a newly added storage system.
Life Cycle – About
23. public interface PrimaryDataStoreLifeCycle extends DataStoreLifeCycle {
}
public interface DataStoreLifeCycle {
public DataStore initialize(Map<String, Object> dsInfos);
public boolean attachCluster(DataStore store, ClusterScope scope);
public boolean attachHost(DataStore store, HostScope scope, StoragePoolInfo existingInfo);
boolean attachZone(DataStore dataStore, ZoneScope scope);
public boolean dettach();
public boolean unmanaged();
public boolean maintain(DataStore store);
public boolean cancelMaintain(DataStore store);
public boolean deleteDataStore(DataStore store);
}
Life Cycle – Interface
24. @Override
public DataStore initialize(Map<String, Object> dsInfos) {
String url = (String)dsInfos.get("url");
String uuid = getUuid(); // maybe base this off of something already unique
Long zoneId = (Long)dsInfos.get("zoneId");
String storagePoolName = (String) dsInfos.get("name");
String providerName = (String)dsInfos.get("providerName");
PrimaryDataStoreParameters parameters = new PrimaryDataStoreParameters();
parameters.setHost("10.10.7.1"); // really get from URL
parameters.setPort(3260); // really get from URL
parameters.setPath(url);
parameters.setType(StoragePoolType.IscsiLUN);
parameters.setUuid(uuid);
parameters.setZoneId(zoneId);
parameters.setName(storagePoolName);
parameters.setProviderName(providerName);
return dataStoreHelper.createPrimaryDataStore(parameters);
}
Life Cycle – Implementation
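The two "really get from URL" comments above point at the missing parsing step. A minimal sketch of what that could look like, assuming the `MVIP:...;SVIP:...` URL format used in the createStoragePool call on the API Calls slide (the helper class and its method names are hypothetical, and the default iSCSI port of 3260 is assumed when the SVIP carries no explicit port):

```java
// Hypothetical helper: pull the SVIP host and iSCSI port out of a storage URL
// of the form "MVIP:192.168.138.180;SVIP:10.10.7.1" (optionally "SVIP:host:port").
public class StorageUrlParser {
    public static String getSvipHost(String url) {
        for (String part : url.split(";")) {
            if (part.startsWith("SVIP:")) {
                String hostAndPort = part.substring("SVIP:".length());
                int colon = hostAndPort.indexOf(':');
                return colon >= 0 ? hostAndPort.substring(0, colon) : hostAndPort;
            }
        }
        throw new IllegalArgumentException("No SVIP in URL: " + url);
    }

    public static int getSvipPort(String url) {
        for (String part : url.split(";")) {
            if (part.startsWith("SVIP:")) {
                String hostAndPort = part.substring("SVIP:".length());
                int colon = hostAndPort.indexOf(':');
                // Assume the iSCSI default port when none is given.
                return colon >= 0 ? Integer.parseInt(hostAndPort.substring(colon + 1)) : 3260;
            }
        }
        throw new IllegalArgumentException("No SVIP in URL: " + url);
    }
}
```

With that in place, the hard-coded `setHost("10.10.7.1")` and `setPort(3260)` lines could become `parameters.setHost(StorageUrlParser.getSvipHost(url))` and `parameters.setPort(StorageUrlParser.getSvipPort(url))`.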
25. Must implement the PrimaryDataStoreDriver interface.
Your opportunity to create or delete a volume and to add a row to or delete a row
from the cloud.volumes table.
A single instance of this class is responsible for creating and deleting volumes on all
storage systems of the same type.
Driver – About
26. public interface PrimaryDataStoreDriver extends DataStoreDriver {
public void takeSnapshot(SnapshotInfo snapshot, AsyncCompletionCallback<CreateCmdResult> callback);
public void revertSnapshot(SnapshotInfo snapshot, AsyncCompletionCallback<CommandResult> callback);
}
public interface DataStoreDriver {
public String grantAccess(DataObject data, EndPoint ep);
public boolean revokeAccess(DataObject data, EndPoint ep);
public Set<DataObject> listObjects(DataStore store);
public void createAsync(DataObject data, AsyncCompletionCallback<CreateCmdResult> callback);
public void deleteAsync(DataObject data, AsyncCompletionCallback<CommandResult> callback);
public void copyAsync(DataObject srcdata, DataObject destData, AsyncCompletionCallback<CopyCommandResult> callback);
public boolean canCopy(DataObject srcData, DataObject destData);
public void resize(DataObject data, AsyncCompletionCallback<CreateCmdResult> callback);
}
Driver – Interface
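The `createAsync`/`deleteAsync` methods follow CloudStack's async-callback pattern: the driver performs the operation (e.g., creates a volume on the SAN), wraps the outcome in a result object, and completes the callback. A self-contained sketch of that pattern, where `MiniCallback` and `MiniResult` are stand-ins for CloudStack's `AsyncCompletionCallback<CreateCmdResult>` and `CreateCmdResult`, and the SAN call is faked:

```java
// Stand-in for CloudStack's AsyncCompletionCallback<T>.
interface MiniCallback<T> { void complete(T result); }

// Stand-in for CloudStack's CreateCmdResult: a path on success, an error otherwise.
class MiniResult {
    final String path;    // e.g., the IQN of the created volume
    final String errMsg;  // null on success
    MiniResult(String path, String errMsg) { this.path = path; this.errMsg = errMsg; }
}

public class MiniDriver {
    // Shape of a createAsync implementation: do the work, then complete the
    // callback with either a path (success) or an error message (failure).
    public void createAsync(String volumeName, long sizeBytes, MiniCallback<MiniResult> callback) {
        try {
            // A real driver would call the SolidFire API here, passing along the
            // requested min/max/burst IOPS; faked for illustration.
            String iqn = "iqn.2010-01.com.solidfire:" + volumeName;
            callback.complete(new MiniResult(iqn, null));
        } catch (Exception e) {
            callback.complete(new MiniResult(null, e.getMessage()));
        }
    }
}
```

The callback style matters because volume creation against a SAN can take a while; CloudStack's storage framework drives these operations asynchronously rather than blocking the caller.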
28. Ask the CS MS (CloudStack Management Server) to provide a list of all storage providers
http://127.0.0.1:8080/client/api?command=listStorageProviders&type=primary&response=json
Ask the CS MS to add a Primary Storage (a row in the cloud.storage_pool table) based on your
plug-in (ex: make CloudStack aware of a SolidFire SAN)
http://127.0.0.1:8080/client/api?command=createStoragePool&scope=zone&zoneId=a7af53b4-ec15-4afc-a9ee-8cba82b43474&name=SolidFire_831569365&url=MVIP%3A192.168.138.180%3BSVIP%3A10.10.7.1&provider=SolidFire&response=json
Ask the CS MS to provide a list of all Primary Storages
http://127.0.0.1:8080/client/api?command=listStoragePools&response=json
API Calls
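These calls are plain HTTP GETs, so the only fiddly part is query-string encoding; note that in the createStoragePool URL above, ':' appears as %3A and ';' as %3B. A hypothetical helper that assembles such a URL (class and method names are made up; a secured deployment would also need to append an API key and signature):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class ApiUrlBuilder {
    // Hypothetical helper: build the createStoragePool GET URL shown on this slide.
    public static String createStoragePoolUrl(String base, String zoneId, String name,
                                              String mvip, String svip, String provider)
            throws UnsupportedEncodingException {
        // The url parameter packs both the management and storage VIPs.
        String url = "MVIP:" + mvip + ";SVIP:" + svip;
        return base + "/client/api?command=createStoragePool"
                + "&scope=zone"
                + "&zoneId=" + URLEncoder.encode(zoneId, "UTF-8")
                + "&name=" + URLEncoder.encode(name, "UTF-8")
                + "&url=" + URLEncoder.encode(url, "UTF-8")
                + "&provider=" + URLEncoder.encode(provider, "UTF-8")
                + "&response=json";
    }
}
```

Calling it with the values from the slide reproduces the percent-encoded url parameter shown above.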
29. Need support for root disks. At the moment, the framework is mainly focused on data
disks.
Need code to create datastores on ESX hosts and shared mount points on KVM hosts
(we already have logic to create storage repositories on XenServer hosts).
Speaking in terms of XenServer (but true for other hypervisors), when a volume is
attached or detached, we need logic in place that handles zone-wide storage.
No GUI support yet for adding a provider; it must be done via the API.
What’s left to do?