App Controller to SQL firewall requirements

So here’s a quick one…

App Controller needs the following services/ports open on the SQL server it is configured to talk to during install:

  • SQL (duh!) (Port TCP 1433)
  • Remote Service Management (RPC SCManager UUID 367ABB81-9844-35F1-AD32-98F038001003)
  • Windows Management Instrumentation (WMI)

If these are open the App Controller installer can automatically detect the SQL instance and you don’t get the pesky “The specified database has insufficient space” error:

image
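One way to open all three from the command line is with netsh on the SQL server. This is a sketch only; the rule group names are the built-in Windows Firewall groups as I know them, so verify them against your OS version before relying on them:

```
netsh advfirewall firewall add rule name="SQL Server (TCP 1433)" dir=in action=allow protocol=TCP localport=1433
netsh advfirewall firewall set rule group="Remote Service Management" new enable=yes
netsh advfirewall firewall set rule group="Windows Management Instrumentation (WMI)" new enable=yes
```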

How to opt in to Microsoft Update with PowerShell

For Windows to retrieve updates from Microsoft Update, as opposed to Windows Update (which is the default), you have to opt in. This is usually done through the Windows Update GUI by selecting “Get updates for other Microsoft Products”:

image

This becomes tedious very quickly so here’s how you can do it with PowerShell:

https://gist.github.com/morgansimonsen/8039966
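The gist is not reproduced here, but for context: the opt-in comes down to calling AddService2 on the Microsoft.Update.ServiceManager COM object with the well-known Microsoft Update service ID. Here is a sketch (the $mu variable matches the query below; the flag value is my reading of the Windows Update Agent API):

```powershell
# Sketch: opt in to Microsoft Update via the Windows Update Agent COM API.
$mu = New-Object -ComObject Microsoft.Update.ServiceManager
# 7 = AllowPendingRegistration + AllowOnlineRegistration + RegisterServiceWithAU;
# the empty string means no local authorization cab file is used.
$mu.AddService2("7971f918-a847-4430-9279-4a52d1efe18d", 7, "") | Out-Null
```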

To see if this was successful you can query the Services property of the $mu object:

$mu.Services

You should see something like this:

Name                  : Microsoft Update
ContentValidationCert : {}
ExpirationDate        : 11/18/2014 1:27:43 AM
IsManaged             : False
IsRegisteredWithAU    : True
IssueDate             : 11/18/2011 1:27:43 AM
OffersWindowsUpdates  : True
RedirectUrls          : System.__ComObject
ServiceID             : 7971f918-a847-4430-9279-4a52d1efe18d
IsScanPackageService  : False
CanRegisterWithAU     : True
ServiceUrl            : https://fe1.update.microsoft.com/v6/
SetupPrefix           : mu
IsDefaultAUService    : True

Name                  : Windows Update
ContentValidationCert : {}
ExpirationDate        : 6/18/5254 9:21:00 PM
IsManaged             : False
IsRegisteredWithAU    : False
IssueDate             : 1/1/2003 12:00:00 AM
OffersWindowsUpdates  : True
RedirectUrls          : System.__ComObject
ServiceID             : 9482f4b4-e343-43b6-b170-9a65bc822c77
IsScanPackageService  : False
CanRegisterWithAU     : True
ServiceUrl            :
SetupPrefix           :
IsDefaultAUService    : False

If you only have the Windows Update entry something went wrong.

One of the major benefits of this is that you can do it remotely. I have yet to find another way to do this. If you know, leave a comment.

Windows Azure compute cores and you

By default, every Windows Azure subscription has a limit of 20 compute cores. This applies to both VMs (IaaS) and worker roles (PaaS). Fortunately this is a soft limit and you can increase it by submitting a support ticket to the Windows Azure team through the management portal here. The following table lists the number of cores for each available Windows Azure instance size:

Compute instance size   CPU Cores
Extra Small             1/6 (Shared)
Small                   1
Medium                  2
Large                   4
Extra Large             8

To see how many cores each of your cloud services is using, just look in the management portal. You will see something like this:

image

Unfortunately you need to manually add all the cores for each cloud service to get the total number. To work around this limitation you can use PowerShell:

https://gist.github.com/morgansimonsen/8039470

Save the above as a PowerShell script and run it from a prompt with the Windows Azure PowerShell module added. It will return all your VMs, sorted by name, their instance sizes, the number of cores they each use and the total number of cores in use in your subscription.
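If you want a feel for what such a script involves, here is a minimal sketch (not the gist itself; the size-to-core mapping is taken from the table above, and the Extra Small shared core is counted as one):

```powershell
# Sketch: list all VMs with their instance sizes and total the cores in use.
$coreMap = @{ ExtraSmall = 1; Small = 1; Medium = 2; Large = 4; ExtraLarge = 8 }
$total = 0
foreach ($vm in (Get-AzureVM | Sort-Object Name)) {
    $cores = $coreMap[$vm.InstanceSize]
    "{0,-30} {1,-12} {2} core(s)" -f $vm.Name, $vm.InstanceSize, $cores
    $total += $cores
}
"Total cores in use: $total"
```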

If you are paying for your Windows Azure subscription with a credit card your credit limit is $15000 by default. This will let you increase to a total of 175 cores in your subscription. If you want more cores and still pay with a credit card you will have to go through a credit check process to increase your credit limit. If you move to an Enterprise Agreement there will be no need for any credit checks to increase your core count beyond 175.

For large deployments the Windows Azure Capacity Planning team will want information from you so that they can allocate the required resources in the regions you specify. Typical questions they will ask are where (which region) you want to deploy, whether you can deploy in several regions, whether you are creating a new deployment or growing an existing one, which VM sizes you require, and which other services (like CDN, SQL, etc.) you may require.

How to upload an image VHD to Windows Azure using CSUpload

Introduction

One of the exciting new possibilities in Windows Azure Infrastructure as a Service (IaaS) is the ability to create Virtual Machine images (VHD files) locally, upload them to your own image store in Windows Azure, and use them as the basis for new VMs. In this article I will demonstrate the process of uploading such an image and creating a virtual machine from it in Windows Azure.

CSUpload

There are many cloud storage explorers out there that can connect to Windows Azure blob storage to upload files. In this case I will use CSUpload.exe from Microsoft, which is part of the Windows Azure SDK. It appears to have the best security and is specifically designed to upload images to Windows Azure. After you have installed the Windows Azure SDK, the easiest way to locate CSUpload is to start the Windows Azure Command Prompt and change to the bin directory.

Disks

You can have three types of VHD files, or disks, in Windows Azure:

  • OS Images: These are generalized, read-only, base disks used for creating new virtual machines. They are either provided by Microsoft, third parties or you.
  • OS Disks: These are specialized, dedicated, writable disks that serve as the OS disk for a specific machine. If you have migrated a VM from your local virtualization platform to run it unmodified in Windows Azure you will use this kind of disk. Max size for an OS disk is 127 GB.
  • Data Disks: Writable disks that do not contain an operating system. They can either be migrated from your local network or created from Windows Azure. Max size for a data disk is 1 TB. The VM size determines how many data disks you can attach.

CSUpload can upload virtual disks with either the VHD or AVHD extension, meaning you can upload both parent (VHD) and child (AVHD) disks.

Setting up

The first thing we need to do is create a connection string for CSUpload to use for connecting to your Windows Azure subscription. You do this with the CSUpload Set-Connection command. (As you can see, CSUpload borrows a page from PowerShell with its verb-noun command structure. Hopefully this means that it will be replaced by native PowerShell cmdlets in the near future.) CSUpload Set-Connection takes exactly one argument, which is a quoted string consisting of your Windows Azure subscription ID, certificate thumbprint and Service Management endpoint. These are most easily obtained by running the Get-AzureSubscription cmdlet from a PowerShell prompt where the Windows Azure module is loaded. Here is an example. The strings marked in red are the ones you need to use in CSUpload:

image

Here’s how the CSUpload command would look (wrapped for readability):

csupload.exe set-connection
"SubscriptionId=3cf46281-f639-44bc-a338-11697697bb2a
;CertificateThumbprint=C525C2AA4B587506F851A7EF5456068270ECC969
;ServiceManagementEndpoint=https://management.core.windows.net"

Windows(R) Azure(TM) Upload Tool version 1.7.0.0
for Microsoft(R) .NET Framework 3.5
Copyright (c) Microsoft Corporation. All rights reserved.

Updated the default connection string.

 

Once you have configured this, CSUpload will remember it until you do something to change your subscription, say delete your management certificate etc. You could also specify these parameters each time you run one of the other CSUpload commands, but that gets old real quick. Better to define them once.
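If you would rather not copy the values by hand, something like this should assemble the string straight from Get-AzureSubscription (a sketch; the property names are my reading of the Windows Azure PowerShell module of the time, so verify them against your version):

```powershell
# Sketch: build the CSUpload connection string from the current subscription.
$sub  = Get-AzureSubscription -Current
$conn = "SubscriptionId=$($sub.SubscriptionId);" +
        "CertificateThumbprint=$($sub.Certificate.Thumbprint);" +
        "ServiceManagementEndpoint=$($sub.ServiceEndpoint)"
csupload.exe set-connection $conn
```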

The upload

To upload a disk, CSUpload requires four mandatory parameters:

  • Destination: the storage account and path where the disk will be stored in Windows Azure. This consists of three parts: the URL of the storage account, which you can get either from the portal or from the Get-AzureStorageAccount cmdlet; the path where you want to put the VHD in that storage account; and the name of the VHD, including the .vhd extension.
  • Label: The disk’s label, which is displayed when you create new VMs in Windows Azure.
  • LiteralPath: The location of the VHD/AVHD file to upload; this is on your local disk.
  • OS: If the disk contains an operating system, specify either Windows or Linux.

Here is an example (wrapped for readability):

csupload.exe Add-PersistentVMImage
-Destination
http://mdsstgacc01.blob.core.windows.net/myimages/base_win7sp1x64.vhd
-Label "Windows 7 SP1"
-OS Windows
-LiteralPath "d:\Hyper-V\Virtual Hard Disks\base_win7sp1x64.vhd" -Name basewin7sp1x64.vhd

The command above produces the following output:

Windows(R) Azure(TM) Upload Tool version 1.7.0.0
for Microsoft(R) .NET Framework 3.5
Copyright (c) Microsoft Corporation. All rights reserved.

Using the saved connection string…
MD5 hash is being calculated for the file 'd:\Hyper-V\Virtual Hard Disks\base_win7sp1x64.vhd'.
MD5 hash calculation is completed.
Running Time: 00:00:00; Throughput: 2694,3Mbps
Elapsed time for the operation: 00:00:15
Creating new page blob of size 5368709632…
Elapsed time for upload: 00:00:00
Registering the image 'basewin7sp1x64.vhd' with repository.
Image 'basewin7sp1x64.vhd' is registered successfully.

Let’s look at what happens…

1. MD5 Hash

CSUpload starts by generating an MD5 hash of your file to guarantee that it is transferred correctly. CSUpload will report its progress as a percentage and display the speed at which it reads data from your local disk. This number is given in Mbps. No actual transfer of data has started yet.

MD5 hash is being calculated for the file 'd:\Hyper-V\Virtual Hard Disks\base_win7sp1x64.vhd'.
MD5 hash calculation is completed.
Running Time: 00:00:00; Throughput: 2694,3Mbps
Elapsed time for the operation: 00:00:15

2. Allocate blob

Once the MD5 calculation is finished CSUpload will allocate a blob of the same size as the disk in your storage account.

Creating new page blob of size 5368709632…
Elapsed time for upload: 00:00:00

3. Transfer data

Now CSUpload will transfer the contents of the disk to Windows Azure. Note that only blocks in the VHD that actually contain data are transferred; empty blocks are skipped. If you have a large disk you will see CSUpload spend some time mapping which blocks contain data.

Uploading: 2,2% complete; Remaining Time: 00:10:29; Throughput: 18,9Mbps
Elapsed time for upload: 00:04:22

4. Register in repository

We are using the CSUpload command Add-PersistentVMImage, which means we are uploading something that can be the basis of new VMs in Windows Azure IaaS. We need to tell Windows Azure that it is dealing with an image, so CSUpload will register the image in your subscription’s image repository. This means it will be visible both under Images in the Virtual Machines node and in your blob storage account.

Registering the image 'basewin7sp1x64.vhd' with repository.
Image 'basewin7sp1x64.vhd' is registered successfully.

image
Image in the image repository.

image
Image in the storage account.

Notes

  • I have a 50 Mbps fiber optic connection at home, and I was able to max it out while uploading the VHD to Windows Azure. It seems there is no upload limit into the datacenter when performing this kind of operation. VM sizes set a cap on bandwidth, but since the VHD is not yet associated with any VM it seems there is no limit. Further investigation required…
  • The VHD file’s virtual size must be a whole number of megabytes. Otherwise you will get this error:
    The VHD https://mdsstgacc01.blob.core.windows.net/myimages/base_win7sp1x64.vhd has an unsupported virtual size of 5000000000 bytes.  The size must be a whole number (in MBs).
  • For you to successfully connect to a new VM based on your image, any image you upload must have Remote Desktop enabled and allowed through the host-based firewall.
  • The local Administrator account must be enabled and must have a password set. Later versions of Windows do not allow logon through Remote Desktop with a blank password.
  • You should use HTTPS when specifying your blob storage. HTTP could leak data during transfer, which CSUpload will warn you about: “Use of http is insecure, can leak data to untrusted viewers and make your content vulnerable to data tampering by a malicious third party.”

UPDATE: As of December 2012 a new cmdlet is included in the Windows Azure PowerShell module: Add-AzureVhd. It performs the same functions as CSUpload with regard to uploading images and disks. Get the latest PowerShell module here:

https://github.com/WindowsAzure/azure-sdk-tools/downloads
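For the upload shown earlier, the Add-AzureVhd equivalent would look something like this (a sketch; check Get-Help Add-AzureVhd for the exact parameters in your module version):

```powershell
# Sketch: upload the same VHD with the native PowerShell cmdlet.
Add-AzureVhd -Destination "https://mdsstgacc01.blob.core.windows.net/myimages/base_win7sp1x64.vhd" `
             -LocalFilePath "d:\Hyper-V\Virtual Hard Disks\base_win7sp1x64.vhd"
```

Note that, as I understand it, this only uploads the blob; registering it as an image is a separate step (the Add-AzureVMImage cmdlet).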

More free stuff in Windows Azure Active Directory

Microsoft announced today that they will offer two features in Windows Azure Active Directory free of charge; Access Control and Core Directory and Authentication.

  • Access Control provides centralized authentication and authorization by integrating with consumer identity providers such as Facebook or by using an on-premises deployment of the Windows Server Active Directory service. With Access Control, you can create a single application that can allow users to sign in with both their organizational credentials stored in Windows Azure Active Directory or Windows Server Active Directory, or by using popular consumer identity services like Microsoft Account, Facebook, Google, or Twitter. Historically, Access Control has been priced by number of transactions, but now it is available at no charge.
  • Core Directory and Authentication enables capabilities such as single sign-on, user and group management, directory synchronization, and directory federation. These capabilities are currently free of charge in the Windows Azure Active Directory Developer Preview and will remain free of charge after Windows Azure Active Directory reaches general availability.

Good times!

Which accounts can I assign as Windows Azure co-administrators?

Recently the ability to add co-administrators to your Windows Azure subscription was migrated from the old Silverlight portal to the new HTML 5 portal. Happy times!

You can only add valid Microsoft Accounts or accounts from Windows Azure Active Directory as co-administrators. Office 365 uses Windows Azure Active Directory, so if you have an account there it can be assigned as a co-administrator. If you enter an invalid account the portal will inform you:

image

Decoding some certificate enrollment client events

The Events

Windows clients can request certificates on their own (autoenrollment) or with the help of a user. When enrolling for a certificate you may find this pair of events in your Application log:

Source: Microsoft-Windows-CertificateServicesClient-CertEnroll
Event ID: 64
Level: Information
User: S-1-5-18
User Name: NT AUTHORITY\SYSTEM
Computer: <servername>
Description: Certificate enrollment for Local system successfully load policy from policy server

Source: Microsoft-Windows-CertificateServicesClient-CertEnroll
Event ID: 65
Level: Information
User: S-1-5-18
User Name: NT AUTHORITY\SYSTEM
Computer: <servername>
Description: Certificate enrollment for Local system is successfully authenticated by policy server {AFD04357-74D7-47B3-82BC-BBE76F4E6F3D}

Obviously the local system successfully enrolled for a certificate, but what do these actually tell us?

Decoding

Let’s start with the first event, the one with ID 65 (remember, the newest events are at the top of the event log). It tells us that a CA, included in a certificate enrollment policy identified by a GUID, has successfully authenticated the system’s enrollment request. In the Windows help we find this about certificate enrollment policies:

Certificate enrollment policy provides the locations of certification authorities (CAs) and the types of certificates that can be requested. Organizations that are using Active Directory Domain Services (AD DS) can use Group Policy to provide certificate enrollment policy to domain members by using the Group Policy Management Console to configure the certificate enrollment policy settings. The Certificates snap-in can be used to configure certificate enrollment policy settings for individual client computers unless the Group Policy setting is configured to disable user-configured enrollment policy.

Unless you have added some yourself, there is one default certificate enrollment policy, the Active Directory Enrollment Policy.

To display the enrollment policies you can run certutil.exe -Policy. It will give you a list of your enrollment policies:

Name: Active Directory Enrollment Policy
Id: {AFD04357-74D7-47B3-82BC-BBE76F4E6F3D}
Url: ldap:
NextUpdate: EMPTY
LastUpdate: EMPTY
Url Flags = 15 (21)
  PsfLocationGroupPolicy -- 1
  PsfAutoEnrollmentEnabled -- 10 (16)
  0x4 (4)
Authentication = 2
  Kerberos -- 2
AllowUntrustedCA: 0
Priority: 7ffffffd (2147483645)
CertUtil: -Policy command completed successfully.

If you look back at the events at the beginning of this post you will recognize the GUID in the event with ID 65. The GUID in the event is the Enrollment Policy ID. So that event is saying that the system was successfully authenticated against the Active Directory Enrollment Policy. Here is a screenshot of the policy as well:

image

NOTE: If you want to see the default certificate enrollment policy and create new ones you need to use Group Policy. You can also create new certificate enrollment policies using the Certificates MMC snap-in. See the links below:

Now, if you want to see which CAs are in that enrollment policy you can run certutil.exe -CA (my output is truncated):

Name: Active Directory Enrollment Policy
Id: {AFD04357-74D7-47B3-82BC-BBE76F4E6F3D}
Url: ldap:
6 CAs:

CA[0]:
CAPropCommonName = <CA Name>
CAPropDNSName = <CA Server FQDN>
CAPropCertificateTypes =
0: EFSRecovery
1: EFS
2: DomainController
3: WebServer
4: Machine
5: User
6: SubCA
7: Administrator

CAPropSecurity = <SDDL>

Without the Certificate Enrollment Policy Web Service role service installed, the only way to get certificate policy information from Active Directory is by using LDAP. This can obviously be a problem, so the Certificate Enrollment Policy Web Service role service was created to allow certificate policy information to be retrieved over HTTPS as well. The Web Service functions as a proxy, accepting client requests for policy over HTTPS and querying Active Directory for certificate policy information over LDAP.

So now we know what the first event (ID 65) means.

The next event is much easier; it just says that the system successfully loaded the certificate enrollment policy from the policy server.

More information:

Enabling notify-driven replication across Active Directory sites

Introduction

Active Directory sites, site links and site link bridges exist to tell the directory service about the network’s physical topology, specifically to identify which parts of it are well connected and where there are slow WAN links. The rule of thumb here is that all systems that are connected with a backbone of 10 Mbps or more are considered well connected. These should therefore be contained in a site. Two or more sites of well connected systems can be linked with site links that have a schedule and an associated cost. The cost can be said to represent how fast the link is; a low cost would cause the directory to prefer it, while a high cost would be more prohibitive. The way we identify systems in Active Directory is by either IPv4 or IPv6 subnets; a list of IP subnets defines a site. Needless to say, the exact same IP subnet cannot be associated with more than one site within the same forest.

However, this functionality also has another application: it can be used to create advanced replication topologies that segment your Active Directory.

Segmentation of Active Directory

Whenever you need to control the communication paths between domain controllers you need sites and site links. In these situations, network speed is not the issue, but rather where communication is allowed. The default topology of Active Directory replication within a site is a bidirectional ring (with extra connections added to keep the hop count down), built on the assumption that every DC in the site can talk to every other DC. If there are any firewalls or blocked communication paths between any DCs, the directory cannot know about this without defined sites and site links. If you use only one site in such a situation you will not have a functional replication topology.

Let’s say you have a network comprising two zones with different security levels (Figure 1). Typically these two zones would be segmented by a firewall. However, to make administration easier and the user experience better, you want to use the same domain across the zones. If you have 4 DCs in each zone, for a total of 8 DCs in the domain, with only one site you would have to allow communication from all DCs in one zone to all DCs in the other zone and vice versa. With two sites, one for each security level/zone, and one or two bridgehead servers defined in each site, you would have far fewer openings in your firewall.

Figure 1

This is also very useful for Active Directory aware clients and services, which try to use the DCs closest to them. Returning to our example of a network with two zones and a firewall separating them: clients in one zone would not be allowed to access DCs in the other. Clients from a zone with a lower security level typically cannot access resources in higher security levels. Depending on the circumstances the reverse may also be true: clients from a higher security level cannot access resources on a lower level. With only one site spanning the two zones, clients in either zone have a 50 % chance of trying to contact a DC they cannot communicate with (if there is an equal number of DCs in each zone). This is because they think that any DC in the site is available. On the other hand, with a correctly defined site topology where the correct IP subnets are associated with the correct site, clients will not attempt to communicate with a DC in another site unless all the DCs in their own site are down.

But there is one challenge with this approach, and that is the speed of replication between sites. In our scenario network speed is not an issue; these configurations are typically segmented LANs with high speed backbones and firewalls. However, Active Directory regards sites as well connected and the links between sites as much slower. Therefore site links only replicate on a schedule, not by using notifications as DCs do within a site. The lowest replication interval you can set on a site link is 15 minutes, meaning that Active Directory updates are replicated across the site link every 15 minutes.

Within a site, replication is notification-driven. A DC with an originating update (a write that is done on the local DC) or a replicated update (a write done on the local DC as the result of replication from another DC) will send a notification message to all its intra-site replication partners telling them that it has changes that need to be replicated. The DCs receiving these messages will in turn connect to the originating DC, pull the changes and update their own copies of the database. All Active Directory replication is pull-based, meaning that a DC will never push its changes over to any other DC. It will just notify its partners, and they initiate a pull to replicate any changes. This is also true for inter-site replication, except that between sites there are no notifications, but a schedule and a replication interval (Figure 2).

Figure 2

Each time the interval is reached, the DCs will initiate pull replication with their configured partners across the site links. It works this way because site links are thought to be slow and costly. Thus Active Directory does not want to monopolize them with a lot of single object updates, but rather replicate a queue of objects at specific intervals. It is this behaviour we want to modify to make Active Directory updates as speedy between sites as they are within sites. Remember, in our scenario network speed is not an issue; we just want to segment our domain.

Configuring site links to use notification based replication

Whether to use notifications or a replication interval to replicate across a site link is configured per site link. If all your sites are in the same site link, configuration is simple, but if you have several site links, e.g. in a hub-and-spoke topology, you will have to configure each site link.

In this example we will configure the DEFAULTIPSITELINK site link that is created automatically in every Active Directory forest and contains all sites by default.

You can make the change using any tool able to write to Active Directory attributes. I will use Active Directory Sites and Services since that is the easiest way. The change itself requires Enterprise Admin level permissions.

  1. Find the site link you want to change, right-click it and select Properties.
  2. Select the Attribute Editor tab.
  3. Find the options attribute (make sure that you display all attributes regardless of whether they have a value, since this attribute does not have a value by default).
  4. Set the value of the options attribute to 1. Notice how the attribute now shows an explanation next to the value, indicating that you have configured the link to use notifications (USE_NOTIFY).
  5. Replicate this change to all DCs manually or wait for regular scheduled Active Directory replication.

image

The options attribute uses a bitmap. Its possible values are:

Decimal Value   Binary Value   Explanation
1               1              USE_NOTIFY
2               10             TWOWAY_SYNC
4               100            DISABLE_COMPRESSION

You can use any combination of these. If your options attribute already has a value you need to perform a bitwise OR operation on the existing value. If the value is 4, convert that to binary (100) and perform an OR operation with binary 1; the result should be binary 101, which you convert to decimal (5) and enter as the value of the options attribute.
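If you prefer scripting the change over the Attribute Editor, here is a sketch using the Active Directory PowerShell module (the site link DN is a placeholder for your own forest):

```powershell
# Sketch: enable USE_NOTIFY (bit 1) on a site link, preserving any existing bits.
$dn = "CN=DEFAULTIPSITELINK,CN=IP,CN=Inter-Site Transports,CN=Sites," +
      "CN=Configuration,DC=contoso,DC=com"   # placeholder DN
$link    = Get-ADObject -Identity $dn -Properties options
$current = if ($link.options) { $link.options } else { 0 }
Set-ADObject -Identity $dn -Replace @{ options = ($current -bor 1) }   # -bor is PowerShell's bitwise OR
```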

More information