Archive for the ‘Startup’ Category

Wednesday, June 19th, 2013

Over the years I’ve collected some rules I keep in my head when I am coding. Some I’ve come up with on my own, others I’ve stolen from @jonwagnerdotcom and @jbright, some have come from books, and others I’ve completely forgotten where I picked up. Feel free to use them in your own head while you code.

  1. THINK
    Above all else think about what you are doing. Don’t blindly follow patterns. Make sure what you are doing makes sense. Trust your own brain.
  2. Code different things differently, same things the same.
    Don’t force a DRY pattern on something that is really different, but if the pattern is really the same use the same code for it.
  3. Better is the enemy of done.
    Code as best you can, but don’t be afraid to ship it. Code is a tool to be used. Nobody can use your code if you don’t let them.
  4. Unwritten code has no bugs.
    Don’t write code unless you have to. Nobody is perfect; your code will have unforeseen effects. Determine beforehand if the problem really needs code to be solved.
  5. Do not repeat yourself (DRY).
    Make sure your code is neat and separated so it can be reused. Don’t type (or copy/paste) the same code twice. Actually, any time you copy/paste code it had better be for a good reason.
  6. Don’t be afraid to delete code, it’s in source control.
    (It is in source control, right? Seriously, it is better to use bad source control than no source control.) Undeleted code just clogs up the code base. Delete code that isn’t needed and lean on source control for history. Too many times have I seen old code hang around because nobody was sure why it was needed in the first place.
  7. It’s just bits and bytes.
    Don’t be afraid to refactor. The raw materials for code are cheap.
  8. Take pride in your work. Don’t be sloppy.
    Coding is a craft. Pay attention to the code you wrote and take some pride in it.
  9. Bugs happen.
    Nobody can plan for the future completely. Bugs will happen and that’s OK. Fix them when they arise.
  10. Have fun.
    Not every task will be fun, but strive to find fun in what you are coding. It will keep you sane and create a better product.
Monday, March 11th, 2013

One of the benefits of a startup is a very rapid code, test, deploy-to-production cycle. For a while at CollectedIt we had a manual (but documented!) process to deploy code to production. This worked well for a while, but it started to get tedious. Plus, as anybody who has done any number of production installs knows, the more steps a human performs, the more chances there are for an error.

It was time for an automated way to deploy our code. First we looked into using something like TFS or Jenkins. These tools, however, require installation somewhere. CollectedIt is very lean, so we prefer not to install excess services in production, spin up a new server in the cloud, or invest in a physical server just for an automated build tool (to be clear, we would spend the resources if we deemed it necessary). Next our thoughts turned to writing something homegrown.

CollectedIt runs on Windows servers in the cloud. I had been exploring PowerShell on and off for a little while, and it seemed like the perfect fit for a quick and easy homegrown deployment script.

PowerShell comes with a very powerful feature called Remoting, a technology that lets you use PowerShell to remotely control one (or many) remote computers (a quick example follows the list below). There were, however, two major obstacles that needed to be overcome.

  1. Remoting has no out of the box way to copy files from server to server
  2. Remoting over the Internet is not the most straightforward of configurations
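
For reference, a basic Remoting call on a trusted network looks something like this (a minimal sketch; server01 and the script block are just placeholders):

PS> Invoke-Command -ComputerName server01 -ScriptBlock { Get-Process w3wp }
PS> Enter-PSSession server01   # or drop into an interactive remote prompt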

Not having an out-of-the-box way to copy bits to a server with PowerShell is annoying. There are ways to copy bits over Remoting (such as passing a byte[] parameter containing the contents of the file when making a remote call). However, none of them was robust enough, or performed well enough, for our tastes. We went ahead and configured an FTP server as a file server. Since PowerShell is built on top of .NET we can use FTP with Microsoft.NET. The code samples in the using FTP with Microsoft.NET blog entry are all in C#, but they translate to PowerShell fairly easily. Here is an example of using FTP over explicit SSL to upload a file (the $ftpuri, $username, $password, and $localfile variables are assumed to be set earlier in the script).


# read the file to upload ($ftpuri, $username, $password, and $localfile are assumed to be set)
$filebytes = [System.IO.File]::ReadAllBytes($localfile)

$ftp = [System.Net.WebRequest]::Create($ftpuri)
$ftp.Method = [System.Net.WebRequestMethods+Ftp]::UploadFile
$ftp.Credentials = New-Object System.Net.NetworkCredential($username, $password)
$ftp.EnableSsl = $true                      # explicit SSL (FTPS)
$ftp.ContentLength = $filebytes.Length
$s = $ftp.GetRequestStream()
$s.Write($filebytes, 0, $filebytes.Length)
$s.Close()
$ftp.GetResponse().Close()                  # complete the request
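
If a server later needs to pull the bits back down, the download side is similar. Here is a rough sketch (assuming the same $ftpuri, $username, and $password variables, plus a hypothetical $localpath to write to):

$ftp = [System.Net.WebRequest]::Create($ftpuri)
$ftp.Method = [System.Net.WebRequestMethods+Ftp]::DownloadFile
$ftp.Credentials = New-Object System.Net.NetworkCredential($username, $password)
$ftp.EnableSsl = $true
$response = $ftp.GetResponse()
$s = $response.GetResponseStream()
$file = [System.IO.File]::Create($localpath)   # $localpath is where the downloaded file lands
$s.CopyTo($file)
$file.Close()
$s.Close()
$response.Close()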

Now that we had a way to copy bits, we turned to the Remoting part. In order to use Remoting over the Internet, first we had to enable Remoting on the server.

PS> Enable-PSRemoting -Force

If we were on a secured domain there would be no more steps. To get Remoting working securely over the Internet, however, we were just getting started.

  1. Create a certificate for SSL. We used the makecert command that comes with the Windows SDK.

    PS> makecert.exe -r -pe -n "CN=collectedit.com" `
    >> -eku 1.3.6.1.5.5.7.3.1 -ss my -sr localmachine -sky exchange`
    >> -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12
    >>

    This places the certificate inside the “Local Computer\Personal” certificate store.
  2. Get the thumbprint for the certificate.

    PS> ls Cert:\LocalMachine\My

    Directory: Microsoft.PowerShell.Security\Certificate::LocalMachine\My

    Thumbprint Subject
    ---------- -------
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX CN=collectedit.com

  3. Now we needed to create an HTTPS endpoint. We can use the winrm command to help with that. One note of warning: winrm was made for use in regular old cmd.exe. Using it from PowerShell we end up with a lot of backticks. If you run into frustration using winrm with PowerShell, just switch to cmd.exe (it’s okay, I won’t tell).

    PS> winrm create `
    winrm/config/Listener?Address=*+Transport=HTTPS`@`{Hostname=`"`collectedit.com`"`
    ;CertificateThumbprint=`"`XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX`"`}
  4. At this point the server is able to accept Remoting connections over the Internet over SSL. To disable the HTTP Remoting listener, it’s as easy as finding the listener and then removing it.

    PS> ls WSMan:\localhost\Listener

    WSManConfig: Microsoft.WSMan.Management\WSMan::localhost\Listener

    Type Keys Name
    ---- ---- ----
    Container {Address=*, Transport=HTTP} Listener_809701527
    Container {Address=*, Transport=HTTPS} Listener_1353925758

    PS> Remove-Item WSMan:\localhost\Listener\Listener_809701527

To use PowerShell Remoting over SSL there are additional parameters we needed to set when creating a remote session. The first tells PowerShell to use SSL and the second ignores the certificate authority check since our certificate is self-signed. This is as easy as:

PS> $so = New-PSSessionOption -SkipCACheck # skip certificate authority check
PS> Enter-PSSession localhost -UseSSL -SessionOption $so # note the "UseSSL"
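
The same options apply when creating a session from a script rather than interactively; a quick sketch using the same $so (the credential prompt here is just for illustration):

PS> $session = New-PSSession collectedit.com -UseSSL -SessionOption $so -Credential (Get-Credential)
PS> Invoke-Command -Session $session -ScriptBlock { hostname }
PS> Remove-PSSession $session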

If there are any issues connecting, first check the firewall settings to allow port 5986, then check out this awesome blog post on Remote PSSession Over SSL. Finally, if you still have issues, use the about_Remote_Troubleshooting help page.
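
For the firewall piece, a rule along these lines opens the port (a sketch; New-NetFirewallRule assumes Windows Server 2012 or later, otherwise netsh advfirewall can do the same job):

PS> New-NetFirewallRule -DisplayName "Windows Remote Management (HTTPS-In)" `
>> -Direction Inbound -Protocol TCP -LocalPort 5986 -Action Allow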

With the two major hurdles cleared we were confident that we could use PowerShell for our needs. Now we just needed to piece together code for:

  1. Building of the project
  2. Zipping up the project
  3. Installing the project

We could have leveraged something like PSake to do our dirty work; however, coming from a .NET/bash/batch background, it was actually easier to build up the script ourselves. This may change in the future.

To build the project we just used MSBuild.

$msbuild = "C:\Windows\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe"
if (![System.IO.File]::Exists($msbuild)) {
    # fall back to 32 bit version if we didn't find the 64 bit version
    $msbuild = "C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe"
}
$buildcmd = "$msbuild $SolutionFile /t:Rebuild /p:Configuration=$Config"
Invoke-Expression $buildcmd
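
Invoke-Expression will happily carry on whether or not MSBuild succeeded, so it’s worth checking the exit code before going any further; a minimal sketch:

if ($LASTEXITCODE -ne 0) {
    Write-Error "Build failed with exit code $LASTEXITCODE"
    exit $LASTEXITCODE
}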

For zipping up the project we chose to use the zip support that (finally!) comes with .NET 4.5 (note: this required us to use PowerShell v3): System.IO.Compression.ZipFile.CreateFromDirectory. In PowerShell it looks like this:


# load the assembly that contains ZipFile (ships with .NET 4.5)
[System.Reflection.Assembly]::LoadWithPartialName("System.IO.Compression.FileSystem")
[System.IO.Compression.ZipFile]::CreateFromDirectory($dir, $zip)

Installation of the CollectedIt code was straightforward. From the beginning we created a setup.exe (that uses the excellent Insight Schema Installer from the Insight Micro-ORM project) to install the SQL database from the command line (hence we could use it with PowerShell). The website only required the output from the build to be copied to the production location. Again this was straightforward using PowerShell. We only had to invoke a few commands remotely on the server to get this going. It looks something like this:


# -UseSSL because we removed the plain-HTTP listener earlier
Invoke-Command -UseSSL -SessionOption $so -ComputerName $Servers -Credential $remotecreds -ArgumentList ($Config) -ScriptBlock {
    Param(
        $Config
    )

    # unzip the build output, run the database setup, then copy the site into the webroot
    [System.Reflection.Assembly]::LoadWithPartialName("System.IO.Compression.FileSystem")
    [System.IO.Compression.ZipFile]::ExtractToDirectory("$Config.zip", "C:\$Config")
    Invoke-Expression "C:\$Config\Setup\setup.exe"
    Copy-Item -Verbose -Recurse -Force "C:\$Config\Web" "D:\webroot"
}

That’s all the pieces we had to put together for our ‘one click’ install. I should mention that our $remotecreds variable is populated with the Get-Credential cmdlet. For a more streamlined process we are investigating securely storing the creds, something along the lines of what is covered in this blog post on Importing and Exporting Credentials in PowerShell.
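
One common approach is to serialize the credential with Export-Clixml, which protects the password with DPAPI (readable only by the same user on the same machine), and read it back with Import-Clixml; a quick sketch (the file name is just an example):

# run once to capture and save the credential
PS> Get-Credential | Export-Clixml .\deploycreds.xml

# later, inside the deploy script
PS> $remotecreds = Import-Clixml .\deploycreds.xml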

Hope this helps you build a streamlined process for deploying your own code with PowerShell. Drop me a line with any questions, or comment with your own PowerShell deployment scripts.

Monday, August 20th, 2012

Note: This entry originally appeared on the tech blog at collectedit.com on 08/20/2012.

One of the first large-ish projects I worked on at CollectedIt was creating a notification system. This would notify users of actions that other users took with their collection (commented, agreed/disagreed with item stats, etc.). While I hope to blog more about all the technical challenges that came up while developing the system, today I am going to concentrate on some of the text templating we do with the notifications.


CollectedIt is architected in such a way that database calls are few and far between whenever the current action is on a code path that could be called from a user-facing interface (website, iPhone app, etc.). This architecture minimizes lag time and leaves us in a good position to scale out. However, as with most forms of architecture, there are trade-offs. The biggest trade-off for the notification system was that the front end knew enough about the action being performed to know it should generate a notification, but knew virtually no details about the objects that were part of the notification.

A more concrete example:
Arthur is browsing medieval collections and stumbles upon an item in Tim’s “Enchanted Weapons” collection called “Staff that shoots flames”. This item is really neat to Arthur, so he wishes to give Tim a kudos. Once Arthur clicks the kudos button, a notification is triggered. At notification generation time all that is known is:

  1. Logged in user id
  2. Current collection id
  3. Current item id
  4. A kudos was given

Nothing is known about Arthur, or Tim, or “Enchanted Weapons”, or “Staff that shoots flames”. It would be fairly trivial to perform a DB query joining together three or four tables to get all the information needed, but we are in code that is executed as a result of the kudos button being clicked, so we want to get back to the user as soon as possible. Doing an extra DB query (particularly one that involves four tables) is not the quickest way to get that information.

What was decided was that the notification could be generated with tokens that could be replaced later down the line. The first thought was to just use Razor, which would be cool; however, the Razor parser is marked for internal use and I have been burnt by using internal methods before (not to say it’s never appropriate to use undocumented methods… but that’s another blog entry). Back to DuckDuckGo to see if anything was out there that would let me do some sort of text templating with a CollectedIt object and some text.

I ran into T4, which at first look seemed like it would work. Looking deeper, though, there is a compile-time class that gets generated, and the runtime just uses that generated object to do the processing. This won’t really work since our templates are also generated at runtime.

After a little more time searching I came up with nothing that would really do what I wanted, so I decided to experiment a little and write my own. Since I wanted to write this quickly, and there really is no reason to write a full-blown text processor (although that would be fun), I needed to boil down what exactly it was I was trying to accomplish.

  • Flexible text replacement
  • Not much logic necessarily needed inside the template itself
  • Both template and replacement objects would be generated at runtime

The first thing I decided to do was take a look at what I could get with C#’s dynamic type. I have used dynamic objects in the past to do things like Object.RuntimeProperty, but that’s not exactly what I have here. I have Object and “RuntimeProperty”, where “RuntimeProperty” is just a string. There may well be a way to use “RuntimeProperty” directly on a dynamic object, but I could not find one (if anybody knows of a way let me know in the comments). Instead I went down the reflection route, since at runtime there is really no difference between a dynamic object and a compiled object when inspecting them with reflection.

Type dynamicType = o.GetType();
PropertyInfo p = dynamicType.GetProperty(property);   // look up a property by name
object dynamicPropValue = p.GetValue(o, null);
FieldInfo f = dynamicType.GetField(property);          // or a public field by name
object dynamicFieldValue = f.GetValue(o);

Great! That takes care of runtime objects and their properties. What about the text template itself, though? Well… I know regular expressions.

In order to not completely reinvent the wheel I picked the T4 syntax (and specifically only the subset of T4 that replaces the text template with a string: <#= Property #>). This is pretty easy to detect with a regex:

<#=\s*(?<prop>[a-zA-Z]\w*)\s*#>
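
To make the capture group concrete, here is a quick sketch of the regex pulling the property name out of a token (the strings here are just for illustration):

// using System.Text.RegularExpressions;
var tokenRegex = new Regex(@"<#=\s*(?<prop>[a-zA-Z]\w*)\s*#>");
Match m = tokenRegex.Match("<#= Author #> really likes your <#= Item #>");
Console.WriteLine(m.Groups["prop"].Value);   // prints "Author"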

With the reflection and the regex we have all the tools needed to satisfy the requirements we came up with. All that’s left is to wrap it up in a nice, usable package. In order to figure out exactly how to package it up I looked at how the text templating would be called.

Continuing with the Arthur/Tim example from above, the code creating the kudos notification would like to generate the notification with an interface like:

string notificationText = 
	"<#= Author #> really likes your <#= Item #> in <#= Collection #>";
string notification = notificationText.ProcessTokens(new {
	Author = "Arthur", 
	Item = "Staff that shoots flames",
	Collection = "Enchanted Weapons" 
});

This points to using an extension method. In fact, that is exactly what we went with. The whole extension method is:

public static string ProcessTokens(this string s, dynamic o)
{
	Type dynamicType = o.GetType();

	string composedString = s;
	MatchCollection tokens = _tokenRegex.Matches(s);
	foreach (Match token in tokens)
	{
		string property = token.Groups["prop"].Value;
				
		PropertyInfo p = dynamicType.GetProperty(property);
		if (p != null)
			composedString = composedString.Replace(token.ToString(), String.Format("{0}", p.GetValue(o, null)));
		else
		{
			FieldInfo f = dynamicType.GetField(property);
			if (f != null)
				composedString = composedString.Replace(token.ToString(), String.Format("{0}", f.GetValue(o)));
		}	
	}

	return composedString;
}
private static readonly Regex _tokenRegex = new Regex(@"<#=\s*(?<prop>[a-zA-Z]\w*)\s*#>");

That’s how we solved the problem of having a disjointed read/write object system. Feel free to use the code snippets above in your own projects to solve any sort of problem where you need runtime text and runtime objects to generate a string. Also make sure to drop by collectedit.com with questions, suggestions, or just some kudos.