Hacking SubText

Okay, I've had SubText up and running for a week and some now, so naturally it is time to tinker. In poking around, I'm not as happy with SubText's integration model as I was with DasBlog's. DasBlog exposed a number of "macros" that you could use to insert certain internal values into your own stuff. I kind of liked their model because once you registered your own code, you could reference your macros from DasBlog stuff as well (bear in mind that I didn't actually implement DasBlog, so my impression could be off).

That said, as long as you don't need to access things like entry links or other internal blog workings, it's pretty easy to modify a "Skin" in SubText to add new stuff. SubText reads a main ASP.Net Web User Control called PageTemplate.ascx that essentially holds the layout for the other Web User Controls that a SubText site uses. Which subcontrols get used is pretty much determined by this PageTemplate control and the controls it references. This structure made it pretty easy for me to add a new embed control I wanted to play with.
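
To give you an idea of the shape of things, hooking a user control into an .ascx template in ASP.Net looks roughly like this (the control name and path here are made up for illustration--they aren't actual SubText controls):
<%@ Register TagPrefix="uc1" TagName="SideBarWidget" Src="Controls/SideBarWidget.ascx" %>
<uc1:SideBarWidget id="sideBarWidget" runat="server" />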

The Technorati Embed

There are a number of sites that offer badges, embeds, and other tools intended to let their users simultaneously spruce up their own sites and provide free advertising for the service involved. The Technorati search feature on the right is an example and was my first foray into modifying my blog template. With Technorati, the tag I need to use to expose a Technorati search link is this:
<script src="http://embed.technorati.com/embed/xw5vkfrkzb.js" type="text/javascript"></script>

To pop it on there, I just added that code right in the PageTemplate.ascx file where I thought it'd do some good (using a text editor). Straightforward, but inelegant.

The Great Games Experiment Embed

For my next trick, I wanted to see how SubText would handle it if I mimicked their own structure to create a new control. To do this, I created a couple of things that are way more complicated than they needed to be. I mean, technically, all I needed to do to pop the user embed onto my site was copy the badge tag from their site into the same place I put the Technorati stuff. The tag isn't all that complicated:
<iframe src="http://www.greatgamesexperiment.com/greatgamesbadge/user/Jacob" height="140" width="204" scrolling="no" frameBorder="0"></iframe>

Doing so would produce this handy "badge":

But I'm a geek and I wanted to test some stuff out, so instead of simply pasting a bunch of html, I created a .Net project and two custom web user controls. One of the controls creates a user badge. The other control creates a game badge. The result of this project is that you can drag one of these controls onto an ASP.Net page and set the "UserName" property (or "Game" property for a game badge) and it'll take care of the rest. The active html would look like this:
<cc1:GreatGamesUserEmbed ID="GreatGamesUserEmbed1" runat="server" UserName="Jacob" />

Somehow, I managed to get that to work. As long as you've registered my assembly with the right tag prefix, you're golden.
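
For reference, the registration at the top of the page looks something like this (the namespace and assembly names are placeholders for whatever you compile the controls into):
<%@ Register TagPrefix="cc1" Namespace="GreatGamesControls" Assembly="GreatGamesControls" %>

And in case you're curious what "take care of the rest" amounts to, here's a rough sketch of the idea--not the actual source, just my shorthand for the shape of it; grab the download below for the real thing:
namespace GreatGamesControls
{
    public class GreatGamesUserEmbed : System.Web.UI.Control
    {
        private string userName;

        public string UserName
        {
            get { return userName; }
            set { userName = value; }
        }

        protected override void Render(System.Web.UI.HtmlTextWriter writer)
        {
            // Emit the same iframe you'd otherwise paste in by hand, with the user name plugged in.
            writer.Write("<iframe src=\"http://www.greatgamesexperiment.com/greatgamesbadge/user/"
                + UserName + "\" height=\"140\" width=\"204\" scrolling=\"no\" frameBorder=\"0\"></iframe>");
        }
    }
}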

But that wasn't quite enough for me, oh, no. Simply adding that control to PageTemplate would be kind of hokey and out of place, so I went one further. Following the pattern of the other controls, I created a GreatGames.ascx file under the Controls directory and referenced that instead. It's a simple file, but then, most of those Control files are. It creates a section that fits in much better on the right side bar and groups the Great Games badges together more naturally.
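
If you want a mental picture, the file amounts to little more than this (the names are illustrative, and the real skin wraps things in its own divs and CSS classes):
<%@ Control Language="C#" %>
<%@ Register TagPrefix="cc1" Namespace="GreatGamesControls" Assembly="GreatGamesControls" %>
<h3>Great Games Experiment</h3>
<cc1:GreatGamesUserEmbed id="userBadge" runat="server" UserName="Jacob" />
<cc1:GreatGamesGameEmbed id="gameBadge" runat="server" Game="SomeGame" />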

And wonder of wonders, it worked first time. Love when that happens.

Wrap-up

SubText doesn't have any documentation that I could find detailing these steps, so making these modifications required mucking about in the code to follow their flow during page delivery. If I get the gumption later, I'll re-write this as a more generic how-to on creating very basic SubText add-ins, because the community needs something like this. Also, I saw a couple of interesting objects that look like you could extend them for more complicated things (like controls that access the SubText data, I'll bet). I'm sure that Phil Haack and others could point those out. In fact, they're nice enough folk on their dev email list that I'd bet they'd be happy to answer questions if I took the time to ask.

Also, if you want those Great Games controls, download the source code. It's nothing fancy, but does the job. Or, if you just want the binary, download that instead. The source code project includes my GreatGames.ascx file as well so you can see an example of implementing them. Let me know if you do anything interesting with them.

14. December 2006 18:34 by Jacob | Comments (1) | Permalink

Creating a Domain Publisher Cert for a Small Internal Software Shop

The trend towards increasing security introduces a number of intricacies for medium-sized business software shops using Active Directory Domains. An internal domain with more than a dozen workstations can introduce issues that are old hat for larger shops, but way beyond anything a small business will have to deal with. I ran into one such issue recently when I decided it'd be a cool thing for one of my apps to actually run from the network.

The Problem

The first sign I had a problem was when a module that worked fine locally threw a "System.Security.SecurityException" when run from a network share. It told me that I had a problem at "System.Security.CodeAccessPermission.Demand()" when requesting "System.Security.Permissions.EnvironmentPermission". Since it worked fine while local, I figured I had a code trust problem and that I could probably get around it in the .Net Framework Configuration settings and push a domain policy that would update everyone.

I knew this because I had run into something similar once before (deploying a VSTO solution on the network).
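
For what it's worth, it doesn't take anything exotic to trip this. Under the default LocalIntranet permissions, something as mundane as the following will do it (not my actual code, just the shape of the thing):
// Reading most environment variables demands EnvironmentPermission,
// which the default LocalIntranet policy doesn't grant, so this throws when run from a network share.
string tempPath = System.Environment.GetEnvironmentVariable("TEMP");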

Here's where it pays to be a real (i.e. lazy) developer: since I've run into this before, wouldn't it be nice to come up with a solution that will make it easier when I run into it again in the future? There are four ways to do this, I figure (well, four that I could think of; there are probably more).

  1. Create some kind of scripting solution for deploying future projects that automatically creates policies (and propagates them) for each new assembly.
  2. Create a standard directory on the network that can be marked as "trusted" and deploy any trusted code into that directory.
  3. Use a "developer" certificate as your trusted publisher.
  4. Figure out how to get a publisher cert to use to sign your code and then propagate a rule certifying that publisher as trusted.

Some developers would go with number 1. Which makes me shudder. Anyone using the first option isn't someone I want to code with or after (barring some quirky deployment requirement that makes it more attractive, of course). Number 2 would probably be the most common solution because it's pretty simple, and most medium-sized businesses are used to security trade-offs that lean on "special knowledge" and on not being an attractive target. Number 3 would be a little more "upper-crust", mainly from people who had tried 4 and run into difficulties. And frankly, for most cases number 3 is likely adequate. The problem is that using number 4 has a couple of not-insignificant hurdles.
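
Incidentally, option 2 usually boils down to a one-liner with caspol.exe, something along these lines--the share path and group name are placeholders, and you'll want to double-check the switches against caspol -? on your framework version:
caspol -machine -addgroup 1. -url "file://\\myserver\trustedapps\*" FullTrust -name "TrustedShare"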

The Issues with Certificates

There are a couple of obstacles in your way if you want to produce a valid publisher certificate for use in signing code.

  • For a smaller internal shop, going the "official" route of contacting one of the major certificate vendors (Thawte, Verisign, et al.) is overkill with a price tag.
  • Setting up a private Certificate Authority isn't that hard, but unless you're running Windows Server 2003 Enterprise Edition, you cannot customize certificate templates.
  • The settings on the Code Signing template mark the private key as "unexportable".

That last is the most significant problem. You see, if you cannot export your private key, you cannot export to a "pfx" file (aka "PKCS #12"). You could export a .cer file (public key only) and then convert that to an spc using cert2spc.exe but that leaves you with a file that pretty much anyone can use to sign code. There's a reason Visual Studio Help warns that cert2spc.exe is for testing only.
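
For reference, that conversion is a single command (the file names here are just examples):
cert2spc c:\test.cer c:\test.spc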

If I lost you in all the security acronyms, don't worry about it. The important thing to note is that a) non-pfx files don't need a password to use in signing assemblies and b) there's no easy way to create a .pfx file that's signed by your organization's CA rather than cooked up by an individual developer.

How to Get Your CA to Issue an Exportable Certificate

There is, however, a loophole you can exploit to con your CA into giving you a Code Signing certificate that you can export into a valid .pfx file. I'll skip the setup stuff on the CA. It is important to make sure that your CA makes the Code Signing template available (it isn't by default). Making it available is pretty straightforward, so I won't go into that here.

The first thing you'll need to do is use makecert.exe to create both a private and public key. A basic commandline to do so would be:

makecert -n "CN=Text" -pe -b 12/01/2006 -e 12/01/2012 -sv c:\test.pvk c:\test.cer

You can hit the help file for other fields you might want to set (or use the -? and -! switches to get two lists of available options). This command will pop up a GUI prompt for your private key password. Note that I typoed "CN=Text". While I meant to make that "Test", it turns out to be a good way to illustrate what that value is, so I decided to keep it in the following examples. Also note that "-pe" is what makes the private key exportable. After running this command, you'll have two files in your root directory: the .pvk file is the private key and the .cer file is the public key certificate.

Next you use a Platform SDK tool called pvk2pfx.exe. This wasn't in my regular path so I had to do a search to find it. I'm guessing that most development machines will have it already. If not, it's available from Microsoft. Here's the command I needed:

"C:\Program Files\Microsoft SDKs\Windows\v6.0\Bin\pvk2pfx.exe" -pvk c:\test.pvk -spc c:\test.cer -pfx c:\test.pfx

Like makecert, this command will give you a password dialog for the private key. Note that even though the command switch is "spc", it'll accept a .cer file just fine. Now, you might think that we're done because we have a valid pfx file. The problem is that this pfx file is derived from a CA of "Root Agency". In order to get this into your internal CA, you're going to need to use your certificate manager. You'll likely need to Run certmgr.msc to get to it. Once there, head to the Personal|Certificates node. This will let you play with certificates on your current workstation.

Right-clicking on "Personal" gives you an "Import" option. Follow the prompts to pull your certificate in. It'll prompt you for the private key password. Once you do this, you'll see your new private key and probably an auto-imported "Root Agency".

Here's where we find the handy loophole. While the default value for allowing private key exporting on the Code Signing template is false, you can use your handy new certificate to request a duplicate. Right-click that key and select "Request Certificate with Same Key". You can also use "Renew Certificate with Same Key". The functional difference seems to me to be that Renew keeps your password while Request provides an empty one (which is nervous-making, but rectifiable using a number of different tools including Visual Studio once the certificate is exported).

In the Wizard that follows, make sure you select the Code Signing template. What you'll receive back is a certificate from your CA for code signing that includes a private key that is marked exportable. At this point, I delete both the "Root Agency" and "Text" certs in order to avoid future confusion.

Use the Right-click|Export command to export this certificate to a pfx file. The pfx file has everything you need to be able to create a .Net Framework code policy using "publisher" as the distinguishing characteristic to mark your code trusted. Once that policy is propagated to all the domain workstations, you're good to go. You'll need to use the resulting pfx file to sign the assemblies (once they're ready for release), but you knew that already :).
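
If you'd rather script that policy than click through the .NET Framework Configuration snap-in, the caspol.exe equivalent is roughly this (the group label, file names, and code group name are examples--verify the switches against caspol -? before relying on them):
caspol -machine -addgroup 1. -pub -cert c:\codesign.cer FullTrust -name "InternalPublisher"

And signing an assembly with the exported .pfx is just as simple (signtool.exe ships with the SDK; the password is whatever you set on export):
signtool sign /f c:\codesign.pfx /p MyPassword MyAssembly.dll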

A Final Note

After I had a valid certificate for signing, I actually ended up using .Net's ClickOnce technology to deploy the project. I still needed a certificate to create a strong-named assembly, but a weaker or temp certificate would have been adequate for internal deployment. The more robust certificate will let me eliminate a security prompt the first time a user runs the application, though. Since that prompt has a big red exclamation point in it, I'm just as happy to eliminate it.

4. December 2006 22:33 by Jacob | Comments (1) | Permalink

DataSets and Business Logic

Whoa, that was fast. Udi Dahan responded to my post on DataSets and DbConcurrencyException. Cool. Also cool: he has a good point. Two good points, really.

Doing OLTP Better Out of the Box

I'll take his last point first because it's pure conjecture. Why don't DataSets handle OLTP-type functions better? My first two suggestions would, indeed, be better if they were included in the original code generated by the ADO.NET dataset designer. I wish that they were. Frankly, the statements already generated by the "optimistic" updates option are quite complex as-is, and adding an additional "OR" condition per field wouldn't really add much in either complexity or readability (both of which are beyond repair anyway), while it would improve reliability and reduce error conditions.
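
To make that concrete: the designer's optimistic-concurrency UPDATE already checks every original value in its WHERE clause, and the suggestion just adds an "unchanged, or already equal to the new value" alternative per field. Simplified, and with made-up column names, the shape would be something like this:
UPDATE Customers SET Name = @Name, Phone = @Phone
WHERE (Id = @Original_Id)
  AND (Name = @Original_Name OR Name = @Name)
  AND (Phone = @Original_Phone OR Phone = @Phone)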

My guess is that it has to do with my favorite gripe about datasets in general: nobody knows quite what they are for. I suspect that this applies as much to the folks in Redmond as anywhere else. Datasets are obviously a stab at an abstraction layer over the server data, and they make it easier to do disconnected database work as a regular (i.e. non-database, non-enterprise-guru) developer. But that doesn't really answer the question of what they are useful for and when you should use them.

DataSets are, essentially, the red-headed stepchild of the .NET framework. They get enough care and feeding to survive, but hardly the loving care they'd need to thrive. And really, I think that LINQ pretty much guarantees their eventual demise. Particularly with some of the coolness that is DLINQ.

Datasets Alone Make Lousy Business Objects

As much as I am a fan of DataSets in general, you have to admit that they aren't a great answer in the whole business layer architecture domain.

I mean, you can (if you are sufficiently clever) implement some rudimentary data validation by setting facets on your table fields (not that most people do this--or even know you can). You can encode things like min/max, field length, and other relatively straightforward data purity limitations. Anything beyond this, however (like, say, when orders in Japan have to have an accompanying telephone number to be valid), would involve either some nasty derived class structures (if you even can--are strongly-typed DataTables inheritable? I've never tried. It'd be a mess to do so, I think), or wrapping the poor things in real classes.
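
As a sketch of what I mean by wrapping (purely illustrative--the column names and the rule are invented), the row carries the data and a plain class owns the rule:
using System.Data;

public class OrderEntry
{
    private readonly DataRow row;

    public OrderEntry(DataRow row)
    {
        this.row = row;
    }

    // The kind of rule you can't express as a simple facet on a column:
    // Japanese orders must carry a phone number.
    public bool IsValid()
    {
        string country = row["Country"] as string;
        string phone = row["Phone"] as string;
        if (country == "JP" && string.IsNullOrEmpty(phone))
            return false;
        return true;
    }
}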

One solution to this is to use web services as your business layer and toss DataSets back and forth as the "state" of a broader, mostly-conceptual object. This is something of a natural fit because DataSet objects serialize easily as XML (and do so much better--i.e. less buggy--in .NET 2.0). This de-couples methods from data, so isn't terribly OO. It can work in an environment where complex rules must work in widely disparate environments (like a call center application and a self-serve web sales application) when development speed is a concern (as in, say, a high-growth environment).
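
The crude shape of that approach (again illustrative--the service and method names are made up):
using System.Data;
using System.Web.Services;

public class OrderService : WebService
{
    // The DataSet is the "state" being tossed back and forth; the rules live in the service methods.
    [WebMethod]
    public DataSet GetOrders(int customerId)
    {
        DataSet orders = new DataSet("Orders");
        // ... fill from the database with a DataAdapter ...
        return orders;
    }

    [WebMethod]
    public void SaveOrders(DataSet orders)
    {
        // ... apply business rules here, then push changes back with a DataAdapter ...
    }
}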

I think this leads to the kind of complexity Udi says he has seen with datasets. The main fault line is that knowing which methods to call (and where to find them) lives in design documents or in a developer's head. This can easily lead to a nasty duplication of methods and chaos--problems that functionally don't exist in a stronger object paradigm.

That Said...

Here is where I stick my neck out and reveal my personal preferences (and let all the "real" developers write me off as obviously deluded): although DataSets make admittedly lousy business objects, most non-enterprise level projects just don't need the overhead that a true object data layer represents. For me, it's a case of serious YAGNI.

Take any number of .NET open source software projects I've played with: not one uses DataSets, yet not one needs all the complexity of their custom created classes, either. They aren't doing complex data validation and their CRUD operations are less robust than those produced automatically from the dataset designer. All at a higher expense of resources to produce.

Or take my current place of gainful employ. We have five ASP.NET applications that all have an extremely complex n-tier architecture--all implemented separately in each web application (and nowhere else--they're not even in a separate library). Each of the business objects has a bunch of properties implemented that are straight get/set from an internal field. And that is all they are. Oh, there's a couple of "get" routines that populate the object for different contexts using a separate Data Access Layer object. And an update routine that does the same. And a create... you get the point. It's three layers of abstraction that don't do anything. I shudder to think how much longer all that complexity took to create when a strongly-typed DataSet would have done a much better job and taken a fraction of the time. It makes me want to call the development police to report ORM abuse.

Which is to Say

Don't let all that detract from Udi's point, though. He's right that for seriously complex enterprise-level operations, you can't really get around the fact that you need good architecture for which datasets will likely be inadequate. Relying wholly on DataSets in that case will get you into trouble.

I personally think that you could get away with datasets being the communication objects between web services in most cases even so, but I also realize that there are serious weaknesses in this approach. It works best if the application is confined to a single enterprise domain (like order processing or warehouse inventory management). Once you cross domains with your objects, you incur some serious side-effects, not least of which is that the meaning of your objects (and the operations you want to perform on them) can change with context (sometimes without you knowing it--want an exercise in what I mean? Ask your head of marketing and your head of finance what the definition of a "sale" is--then go ask your board of directors).

So yeah, DataSets aren't always the answer. I'd just prefer if more developers would make that judgement from a standpoint of knowing what DataSets are and what they can do. Too often, their detractors are operating more from faith than from knowledge.*

*Not that this is the case for Udi. For all he has admitted that he isn't personally terribly familiar with datasets, his examples are pretty good at delineating their pressure points and that tends to indicate that he's speaking from some experience with their use in the wild.

 

21. November 2006 19:23 by Jacob | Comments (0) | Permalink

Blogging Software Update

Well, I still haven't chosen what blog software I want to use in a new home. So many to choose from, and none are a perfect fit. The comments left by Scott Hanselman, Phil Haack, and Dave Burke were good ones and point out the connectedness of the blogosphere (is that word accepted enough to forgo the quotes now?).

Since I gave DasBlog such short shrift in my original post, I spent some quality time with it. DasBlog has some really strong points in its favor. For one, it has a very active developer community, though that isn't as apparent as it is with Subtext. DasBlog 1.9 was released today and I'm even mentioned in the release announcement (it's always nice to see your name in print--even virtual print).

I am leaning towards DasBlog now. Two bonuses of using it: no tricky database settings (it uses XML files stored in the web file system), and a really flexible plug-in system (they call them macros, but they're actually compiled .NET assemblies with a standard interface and config-file usage).

Not that I've decided to use it. Or not. I'm not generally this indecisive. I think...

 

22. September 2006 18:38 by Jacob | Comments (0) | Permalink

Blogging Software

It's always the little things that chafe, you know? I mean, for the most part I'm happy with Windows Live Spaces, but there are little things that bug me from time to time, so I find myself getting antsy. On the plus side, I like the modules and the freedom to place them where I want them. I like the book list (but I'd like it better if I could a) put it in the order I want and b) rate the books listed there). I like my links.

But I find that I keep wanting things I get with more of a hosting service. I want more control. I'm a tinkerer, what can I say? So I've been looking at blogging software a bit. Since I have an account at Go Daddy, I'd like to find something that I can use there, and I'd like very much for it to be created in ASP.NET. It'd be nice if it was open source as well, if only so I can tweak and explore at my own capricious whim. Also, as long as I'm putting together a wish-list, how about if it could be programmed in ASP.NET 2.0? I mean, master pages and application themes with skinning support is, uh, lacking an appropriate white-guy expression, "da shiz".

The front-runners in the .NET space for blogging software appear to be SubText and DasBlog--both are branches from their progenitor .Text which appears to be defunct. Unfortunately, since .Text was originally ASP.NET 1.1, both SubText and DasBlog are rooted in that technology. They both support custom themes, but they had to hack ASP.NET 1.1 to do so--mostly with custom controls.

DasBlog

I was originally drawn more to DasBlog because I've become a fan of Scott Hanselman--first from his podcast, Hanselminutes, but later from his blog (which actually uses DasBlog; kudos for eating the dinner you've made). He's one of those over-producers who seems to have his hand in fifteen million things at a time and is able to simultaneously talk about it all.

However, DasBlog's main website is frequently down, and there doesn't appear to be a lot of action in the form of improvements, releases, news, or updates. Which makes me wonder if it isn't a dying product, suffering from Scott's hyper interests.

SubText

SubText is developed by another blogger I like, Phil Haack. Phil also lives in the house he built so you'll hear his experiences with SubText on his blog sometimes. I was intrigued to see him announce that SubText 1.9 has been released recently. SubText 1.9 is a project conversion to ASP.NET 2.0 so I was reasonably excited to see its release.

So excited that I went ahead with an install. It was a painful experience. Not because SubText isn't a pretty good product, but because an install on a cheap Go Daddy account is a step or three down from the expected configuration. The main blockage is that while Go Daddy gives you dbo (owner) privileges on your database, you have highly restricted rights on the master database. Unfortunately, the install assumes it can run a select against master to see whether a table already exists, and that select blew chunks.

One advantage of open source, though, is that someone reasonably competent (or simply lucky as is more likely my case) can dig through the install process and see what needs to happen. Since I could see that the table didn't exist, I ran the script manually. Unfortunately, since the install tracks installation stage in memory instead of checking the database, I ended up having to do the entire install manually instead of just that first step. Ouch.

So it was a hack, but it appears to have succeeded. I'm not entirely happy with the implementation, though, because while SubText is now ASP.NET 2.0, it doesn't actually use the new master page and theme features. Those may be implemented in the future, but since that's a relatively fundamental alteration, it would break a lot of things--particularly a lot of user-created theme files. Creating work for yourself is one thing, but making your users go back and re-do all the custom themes they so generously contributed to your project is going to be a hard sell when there isn't a well-established benefit. Indeed, the roadmap implies that if it happens, it's at least two releases away (and frankly, 2.1 looks a little daunting to me and should probably be cut down some if they want to take less than a year with it).

SUB

One of Phil's more endearing traits is a kind of perverse generosity that led him to advertise for a competitor (while throwing down the gauntlet of course). Since I'm not entirely happy with SubText, I thought that I'd give SUB (Single-User Blog) a look-see.

Frankly, I like SUB. It's pretty simple and since it's done from scratch in ASP.NET 2.0, it has all the goodies I've been looking for and some I had thought of but didn't figure I could get.

Unfortunately, SUB has two drawbacks that make me hesitate. The first is that it is, as its title clearly states, single-user. One thing I came to like about SubText is that you can support more than one blog from a single installation. Indeed, I was able to point both domains I had registered with Go Daddy to the same location, have them run the exact same files, and yet have each site perfectly individualized (it does this by checking the incoming URL to know which blog settings to use).

The second drawback has to do with my personal tastes in programming, so I'm going to leave that for another post.

So what?

I've no idea what I'll end up doing with my blog(s). Since one of them is a new one for Cawti, I'll have to make some relatively irrevocable choices really soon here. I'm feeling all Frankensteiny, though, so I may just mash pieces of lots of different things together so I can terrorize intolerant villagers. Yeah, that sounds fun...

 

7. September 2006 03:44 by Jacob | Comments (0) | Permalink

Experienced Developers

The following applies mainly to in-house business software development. It might or might not apply to ISV or other product development houses. I think that there's room for broad application, but you can hit my list of software blogs if you want some quality sources for more generalized ISV or product development exploration.

Back when I was looking to hire developers a couple years ago, I knew some programmers that I wanted on my team. I had worked with them before and knew what they were capable of. Unfortunately, they didn't have much experience with .NET--our platform at the time. I made the case then (to my boss and HR) that a good developer is a good developer and the language and framework are more or less immaterial for people with a broad set of experience. The syntax and capabilities of the language and environment are teachable, and a developer's broad experience helps them pick it up quickly.

I was fortunate enough to bring one of those developers (who is currently working in indie games) onto my team. The experience was great. I was right that he was able to come up to speed and pick up the nuances of the environment very quickly. Further, he was able to solve some issues outside of the framework in ways that saved us a lot of time (using Python for text file manipulation, for example).

So does this mean that you can toss out environment-specific skill sets when evaluating who to hire? Not in my opinion. You see, at that time we already had a core group that was intimately familiar with the capabilities of .NET. Domain-specific knowledge is crucial in producing software applications that actually do what they are supposed to do in a reasonable timeframe. People who have already digested the learning curve are invaluable in blazing the way for others who haven't yet done so. The reason specialists are crucial is simple--somebody who has already walked the trail is able to point out the pitfalls and shortcuts and help the whole team move quickly. The new guy was able to come up to speed so fast because he had people he could ask when he had questions and who could explain why things worked the way they did.

I came out of this experience with respect for having a diverse programming team. You have to have at least some specialists who know the environment inside and out. These specialists are going to be the limit of what you can accomplish in a lot of ways, so they need certain skills--chief among them, the ability to communicate with other developers (because they'll do more of that than they probably want to, and because a specialist who can't communicate isn't able to influence the project nearly as well as one who can).

But I personally don't think it is a good idea to have a team or department composed solely of specialists. Any software development that requires more than two people is going to run into problems in a wide array of domains, and having a broad skill set to draw on can be very useful. What proportion of specialists to generally kick-ass developers do you need? Well, that's a good question. I welcome ideas in the comments, but I suspect this is one of those things that is highly contextual and thus at the heart of the "art" of software development. I'm thinking that a narrowly defined ISV benefits from a higher ratio of specialists than, say, a multi-level marketing company looking to manage their necessarily custom business needs.

25. July 2006 19:30 by Jacob | Comments (0) | Permalink
