Head Games

Good friend and indie game developer Jay Barnson has just taken game development in a new direction: Developing in Public. He sounds a little nervous about it, which makes sense. Unlike those who have previously attempted this feat, though, I think Jay stands a good chance of pulling it off, and with good style.

There are two things that are likely to make this interesting. First, Jay's about ready to release his second indie game, Apocalypse Cow, so we're likely to see this one through all the way to a completed game. Second, he's an honest and engaging writer.

It's that second quality that makes me eager to eavesdrop on the development of his new game. I know he'll present it warts and all, and that it'll be entertaining along the way.

20. April 2007 15:31 by Jacob | Comments (0) | Permalink

Are We There Yet?

"So when will you be done with this development project?"

I don't know about you, but I hate this question. There simply is no good answer for it. It seems like such a simple question with a simple DateTime-valued answer. One of these days I swear I'll answer, "Oh, I'll be done next Tuesday at 2:34pm," just to see what happens.

And seriously, businesses hate that we have such difficulty answering the question. It seems perfectly reasonable for them to want to know when they can plan on having the new processes they know they desperately need. Developers demand high salaries and are ostensibly professionals; they should be able to give a professional answer, right?

The Road is Well Paved

The thing is, software development is a lot harder than people expect it to be--and this includes software professionals. Even simple software projects can run afoul of hidden complexities that destroy well-meaning estimates and make everyone unhappy. And no matter how you hedge your answers, people simply don't remember all the caveats, maybes, and what-ifs you use to indicate uncertainty.

The end result is that developers seldom make their ship-by dates and companies become disillusioned and impatient with all software development. That's not helpful for anybody, but it's pretty much the rule these days.

And the fact of the matter is that the vast majority of developers (and development managers) never learn how to answer the estimate question. They'll move from company to company, repeating the cycle of hope, suspicion, and disappointment over and over again. Which works well enough for the developers in the boom times when the demand for development is so high that mildly talented house plants can get hired as developers.

So a lot of people are making the same mistakes over and over. Businesses can be excused for assuming that this is simply the way things are and feel confident in their distrust of software professionals. They've been there, done that, bought the t-shirt.

Paying the Toll

This environment causes developers who care about these kinds of things a lot of heartburn. Everyone pays for the ongoing cycle of disillusionment. I believe that this is what really prompts posts like the recent ones from Ted Neward talking about professional ethics. And I've been known to throw my own hat into the ring as well.

We get tired of paying for the sins of those who have gone before. And I'm not referring to the messed up legacy code we stumble into, either. Frankly, messed up code is the least of your problems coming into a situation with a client who has been burned by previous developer promises. Companies that have had deadline after deadline missed have a degree of mistrust that is very hard to overcome.

We pay for this distrust in a hundred different ways. The thing is, trust is a paying commodity in business. Working with partners you trust means there's a whole lot of overhead you can simply skip. An analogy: if I trust a plumber to fix my sink quickly and professionally, I can go get a burger and leave him to it. It's only when I don't have that trust that I have to pay the additional overhead of having someone I do trust watching to make sure he's not napping under the sink.

Want to see a business manager go into a dreamy fantasy? Ask them what it'd be like to be able to trust their software developers (in house or not). The more experience they've had with developers, the more intense the fantasy.

The Rubber Meets the Road

We have a couple of areas of friction in businesses that exacerbate this situation. The main disconnect with business managers is that we have borrowed terminology and tools from other disciplines without understanding that our processes are fundamentally different. It's tricky because the temptation to use manufacturing terminology is immense. After all, we are creating a product of sorts. This makes so much sense on an intuitive level that it's hard to realize that the comparison is misleading and potentially dangerous.

I wish we could retrain everyone to make analogies to other business specialties. Scientific research or law come to mind as potentially useful analogies because both are similarly plagued by the impact of unique situations, changing ground rules, and unforeseen complexities. It would be interesting to investigate how managing software development like a patent application or drug research would change how we look at the problems involved. We might have stumbled onto iterative cycles and responding to altered requirements a whole lot sooner, for example.

Paying Attention

The real problem, though, is that most developers (and even most development managers) don't take the time to learn about common friction points. Nor do they take the time to build relationships with their business counterparts so that they have some political capital (aka trust) to spend when it's needed. It's easy to forget that much of the progress in software development practices is pretty recent in terms of business processes. After all, business managers don't move at the speed of light, and changes take time to penetrate those layers.

Which means that a whole lot of industry advances aren't even theory yet in the boardroom.

And the fact of the matter is that you cannot expect a business manager to understand what makes Agile practices work. Or the reason that strong unit testing saves time over the long run even though it takes more time up front. Learning to communicate at a level that is sufficiently detailed for smart business decisions without getting bogged down in the jargon inherent in any specialty is an invaluable skill, and one best learned sooner rather than later. That means thoroughly understanding those theories yourself--not just on the surface or at the level of buzzword compliance. It also means learning to communicate that understanding from orbit, at 30,000 ft, at 5,000 ft, and right on the ground. This is hard to do. It takes practice. It also takes exposure to business manager types. I'm not sure which is harder...

Something to think about, though: not learning this skill leaves you at the mercy of those who do learn it.

My point, though, is that it takes both. You have to learn your profession so thoroughly that you can deconstruct its "best practices" ("design patterns", whatever) and rebuild them from basic principles on the fly. AND you have to learn to communicate that understanding comfortably to people of varying familiarity with software development in a business environment.

That's what it takes to be a true professional. It's easy to let those two skills fall out of balance. Individuals who understand both are invaluable to a company. Also rare. Companies who discover someone capable of both are often surprised at how much smoother things run with that person placed where they can do the most good--a point Jeff Atwood's latest on becoming a better programmer drives home.

So I don't have a formula for quick and accurate estimates. Just a lot of hard work. Still, here's a tip for free: anyone asking for a firm delivery date is inherently assuming BDUF (Big Design Up Front). Once you know that, you know where to start your answer.

29. January 2007 18:19 by Jacob | Comments (4) | Permalink

Creating a Domain Publisher Cert for a Small Internal Software Shop

The trend towards increasing security introduces a number of intricacies for medium-sized business software shops using Active Directory Domains. An internal domain with more than a dozen workstations can introduce issues that are old hat for larger shops, but way beyond anything a small business will have to deal with. I ran into one such issue recently when I decided it'd be a cool thing for one of my apps to actually run from the network.

The Problem

The first sign I had a problem was when a module that worked fine locally threw a "System.Security.SecurityException" when run from a network share. It told me that I had a problem at "System.Security.CodeAccessPermission.Demand()" when requesting "System.Security.Permissions.EnvironmentPermission". Since it worked fine while local, I figured I had a code trust problem and that I could probably get around it in the .Net Framework Configuration settings and push a domain policy that would update everyone.

I knew this because I had run into something similar once before (deploying a VSTO solution on the network).

Here's where it pays to be a real (i.e. lazy) developer: since I've run into this before, wouldn't it be nice to come up with a solution that will make it easier the next time I run into it? I figure there are four ways to do this (well, four that I could think of; there are probably more).

  1. Create some kind of scripting solution for deploying future projects that automatically creates policies (and propagates them) for each new assembly.
  2. Create a standard directory on the network that can be marked as "trusted" and deploy any trusted code into that directory.
  3. Use a "developer" certificate as your trusted publisher.
  4. Figure out how to get a publisher cert to use to sign your code and then propagate a rule certifying that publisher as trusted.

Some developers would go with number 1. Which makes me shudder. Anyone using the first option isn't someone I want to code with or after (barring some quirky deployment requirement that makes it more attractive, of course). Number 2 would probably be the most common solution because it's pretty simple, and most medium-sized businesses are used to security trade-offs that rely on "special knowledge" and on not being an attractive target. Number 3 would be a little more "upper-crust", coming mainly from people who had tried 4 and run into difficulties. And frankly, for most cases number 3 is likely adequate. The problem is that number 4 has a couple of not-insignificant hurdles.

The Issues with Certificates

There are a couple of obstacles in your way if you want to produce a valid publisher certificate for use in signing code.

  • For a smaller internal shop, going the "official" route of buying a certificate from one of the major commercial certificate authorities (Thawte, VeriSign, et al.) is overkill with a price tag.
  • Setting up a private Certificate Authority isn't that hard, but unless you're running Windows Server 2003 Enterprise Edition, you cannot customize certificate templates.
  • The settings on the Code Signing template mark the private key as "unexportable".

That last is the most significant problem. You see, if you cannot export your private key, you cannot export to a "pfx" file (aka "PKCS #12"). You could export a .cer file (public key only) and then convert that to an spc using cert2spc.exe but that leaves you with a file that pretty much anyone can use to sign code. There's a reason Visual Studio Help warns that cert2spc.exe is for testing only.
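
If you're curious what that testing-only route looks like, the conversion itself is a one-liner (the file names here are just placeholders):

cert2spc c:\test.cer c:\test.spc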

If I lost you in all the security acronyms, don't worry about it. The important things to note are that a) non-pfx files don't need a password to use in signing assemblies and b) there's no easy way to get a pfx file backed by your organization's CA rather than by a developer-generated test certificate.

How to Get Your CA to Issue an Exportable Certificate

There is, however, a loophole you can exploit to con your CA into giving you a Code Signing certificate that you can export into a valid .pfx file. I'll skip the setup stuff on the CA. It is important to make sure that your CA makes the Code Signing template available (it isn't by default). Making it available is pretty straightforward, so I won't go into that here.

The first thing you'll need to do is use makecert.exe to create both a private and public key. A basic commandline to do so would be:

makecert -n "CN=Text" -pe -b 12/01/2006 -e 12/01/2012 -sv c:\test.pvk c:\test.cer

You can hit the help file for other fields you might want to set (or use the -? and -! switches to get two lists of available options). This command will pop up a GUI prompt for your private key password. Note that I typoed "CN=Text". While I meant to make that "Test", it turns out to be a good way to illustrate what that value is, so I decided to keep it in the following examples. Also note that "-pe" is what makes the private key exportable. After running this command, you'll have two files in your root directory. The .pvk is the private key file and the .cer is the public key.

Next you use a Platform SDK tool called pvk2pfx.exe. This wasn't in my regular path so I had to do a search to find it. I'm guessing that most development machines will have it already. If not, it's available from Microsoft. Here's the command I needed:

"C:\Program Files\Microsoft SDKs\Windows\v6.0\Bin\pvk2pfx.exe" -pvk c:\test.pvk -spc c:\test.cer -pfx c:\test.pfx

Like makecert, this command will give you a password dialog for the private key. Note that even though the command switch is "-spc", it'll accept a .cer file just fine. Now, you might think that we're done because we have a valid pfx file. The problem is that this pfx file is derived from a CA of "Root Agency". In order to get a certificate issued by your internal CA, you're going to need to use the certificate manager. You'll likely need to run certmgr.msc to get to it. Once there, head to the Personal|Certificates node. This will let you play with certificates on your current workstation.

Right-clicking on "Personal" gives you an "Import" option. Follow the prompts to pull your certificate in. It'll prompt you for the private key password. Once you do this, you'll see your new private key and probably an auto-imported "Root Agency".

Here's where we find the handy loophole. While the default value for allowing private key exporting on the Code Signing template is false, you can use your handy new certificate to request a duplicate. Right-click that key and select "Request Certificate with Same Key". You can also use "Renew Certificate with Same Key". The functional difference seems to me to be that Renew keeps your password while Request provides an empty one (which is nervous-making, but rectifiable using a number of different tools including Visual Studio once the certificate is exported).

In the Wizard that follows, make sure you select the Code Signing template. What you'll receive back is a certificate from your CA for code signing that includes a private key that is marked exportable. At this point, I delete both the "Root Agency" and "Text" certs in order to avoid future confusion.

Use the Right-click|Export command to export this certificate to a pfx file. The pfx file has everything you need to be able to create a .Net Framework code policy using "publisher" as the distinguishing characteristic to mark your code trusted. Once that policy is propagated to all the domain workstations, you're good to go. You'll need to use the resulting pfx file to sign the assemblies (once they're ready for release), but you knew that already :).
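
If it helps, that final signing step looks something like this with signtool.exe from the Platform SDK (the file names and password are placeholders, and Visual Studio's project signing page can do the same thing):

signtool sign /f c:\mycert.pfx /p MyPassword MyAssembly.dll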

A Final Note

After I had a valid certificate for signing, I actually ended up using .Net's ClickOnce technology to deploy the project. I still needed a certificate to sign the deployment manifests, but a weaker or temporary certificate would have been adequate for internal deployment. The more robust certificate will let me eliminate a security prompt the first time a user runs the application, though. Since that prompt has a big red exclamation point in it, I'm just as happy to eliminate it.

4. December 2006 22:33 by Jacob | Comments (1) | Permalink

DataSets Suck

First off, a correction. In my recent post on OLTP using DataSets, I gave four methods that would allow you to handle non-conflicting updates of a row using the same initial data state. In reviewing a tangent later I realized that method 2 wouldn't work. Here's why:

The auto-generated Update for a datatable does a "SET" operation on all the fields of the row and depends on the WHERE clause to make sure that it isn't changing something that wasn't meant to be changed. Which means that option 2 would not only fail to be a good OLTP solution, it'd overwrite prior updates without any notice. Much better to simply throw a DbConcurrencyException and let the application handle the discrepancy (or not).

Which also answers Udi's question of why it doesn't do that out of the box. It'd be nice if the defaults were implemented with a more robust OLTP scenario in mind, though. It'd be pretty complex, but that's because OLTP has inherent complexities. You would either have to generate the Update statement on the fly (thus breaking the new ADO.NET 2.0 batch option on the adapters) or put the logic at the field level (using an SQL "CASE" statement). I'm not sure how efficient CASE is on the server, but that could potentially fix my 2nd option.
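
To make the field-level idea concrete, here's a rough sketch of the shape such a statement might take, assuming a plain SqlDataAdapter whose command parameters are already wired up for both original and current values. The table and column names are invented, and the designer's real generated parameter naming differs; the point is the SQL shape, not the plumbing.

    using System.Data.SqlClient;

    class FieldLevelUpdate
    {
        // Rough sketch only: for each column, keep the database value when the client
        // didn't change that field; otherwise write the new value. The WHERE clause
        // still refuses the update when a field the client *did* change was modified
        // by someone else, so a real conflict still surfaces as a DbConcurrencyException.
        public static void Apply(SqlDataAdapter adapter)
        {
            adapter.UpdateCommand.CommandText = @"
                UPDATE Customer SET
                    Address       = CASE WHEN @original_Address = @Address
                                         THEN Address ELSE @Address END,
                    MaritalStatus = CASE WHEN @original_MaritalStatus = @MaritalStatus
                                         THEN MaritalStatus ELSE @MaritalStatus END
                WHERE CustomerID = @CustomerID
                  AND (Address = @original_Address OR @original_Address = @Address)
                  AND (MaritalStatus = @original_MaritalStatus OR @original_MaritalStatus = @MaritalStatus)";
        }
    }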

But this brings me to my second and broader point again: the disdain that "real" programmers have for datasets. This was refreshed for me recently by a blog post from Karl Seguin at CodeBetter. I liked that post a lot (about using a coding test when evaluating potential hires) until I got to the bit about the tell-tale signs he would look for. Right at the top?

Datasets and SqlDataSource are very bad

He has since amended that so:

Datasets and SqlDataSource are very bad (update: the dataset thing didn't go over too well in the comments ;) )

and added in the comments:

Sorry everyone...I've always had a thing against datasets...

He's not alone here. It's a common feature of highly technical programmers to hold datasets in contempt. Which would be fair enough if they were willing to give reasons or support for the position. If I felt that such statements came from an informed foundation, there wouldn't be much to quibble about. Unfortunately, too often this is simply not the case.

On those rare occasions when I can get one of these gurus to expound a bit, this attitude generally traces back to a couple of bad experiences where datasets were used poorly or shoved into a situation where they didn't belong. Indeed, Karl goes on to give an example of the kind of thing he doesn't want to see, and I have to agree that he has a point. But while his example uses a dataset, the dataset isn't the source of the problem. The problem is actually in his second point after datasets:

Data access shouldn't be in the aspx or codebehind

Since he's looking for strong enterprise-level coding habits, he's right that it'd be better encapsulated in its own class, and better still in its own library.

Again, it isn't the dataset he actually has a quibble with. He's just perpetuating a prejudice when he reflexively includes them as a first strike. To his credit, he's willing to own up to the prejudice. Unfortunately, he does so in a way that indicates that it is a prejudice he has no plans to explore or evaluate. That's what I hate about the whole anti-dataset vibe in the guru set. Particularly since these tend to be people who are proud of their rationality and expect others to listen to them when they expound on technical topics.

 

23. November 2006 16:47 by Jacob | Comments (0) | Permalink

DataSets and Business Logic

Whoa, that was fast. Udi Dahan responded to my post on DataSets and DbConcurrencyException. Cool. Also cool: he has a good point. Two good points, really.

Doing OLTP Better Out of the Box

I'll take his last point first because it's pure conjecture. Why don't DataSets handle OLTP-type functions better? My first two suggestions would, indeed, be better if they were included in the original code generated by the ADO.NET dataset designer. I wish that they were. Frankly, the statements already generated by the "optimistic" updates option are quite complex as-is; adding an additional "OR" condition per field wouldn't add much in either complexity or readability (both of which are beyond repair anyway), and it would add to reliability and reduce error conditions.

My guess is that it has to do with my favorite gripe about datasets in general: nobody knows quite what they are for. I suspect that this applies as much to the folks in Redmond as anywhere else. Datasets are obviously a stab at an abstraction layer over the server data, one that makes it easier to do asynchronous database work as a regular (i.e. non-database, non-enterprise-guru) developer. But that doesn't really answer the question of what they are useful for and when you should use them.

DataSets are, essentially, the red-headed stepchild of the .NET Framework. They get enough care and feeding to survive, but hardly the loving care they'd need to thrive. And really, I think that LINQ pretty much guarantees their eventual demise. Particularly with some of the coolness that is DLINQ.

Datasets Alone Make Lousy Business Objects

As much as I am a fan of DataSets in general, you have to admit that they aren't a great answer in the whole business layer architecture domain.

I mean, you can (if you are sufficiently clever) implement some rudimentary data validation by setting facets on your table fields (not that most people do this--or even know you can). You can encode things like min/max, field length, and other relatively straightforward data-purity limitations. Anything beyond this, however (like, say, requiring that orders in Japan have an accompanying telephone number to be valid), involves either some nasty derived class structures (if you even can--are strongly-typed DataTables inheritable? I've never tried; it'd be a mess, I think) or wrapping the poor things in real classes.
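
For the curious, here is a minimal sketch of that split (all the names are invented; the typed dataset designer just sets the same column properties for you behind the scenes):

    using System;
    using System.Data;

    class OrderRules
    {
        // The "facet" level of validation lives on the columns themselves; anything
        // cross-field has to live in code like ValidateOrder below, called from a
        // RowChanging handler or from a wrapper class.
        public static DataTable BuildOrdersTable()
        {
            DataTable orders = new DataTable("Orders");

            DataColumn country = orders.Columns.Add("Country", typeof(string));
            country.MaxLength = 2;          // facet-style limits
            country.AllowDBNull = false;

            DataColumn phone = orders.Columns.Add("Phone", typeof(string));
            phone.MaxLength = 20;

            return orders;
        }

        public static void ValidateOrder(DataRow order)
        {
            // Cross-field business rule: no column facet can express this one.
            if ((string)order["Country"] == "JP" && order.IsNull("Phone"))
                throw new ConstraintException("Japanese orders require a phone number.");
        }
    }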

One solution to this is to use web services as your business layer and toss DataSets back and forth as the "state" of a broader, mostly-conceptual object. This is something of a natural fit because DataSet objects serialize easily as XML (and do so much better--i.e. with fewer bugs--in .NET 2.0). This decouples methods from data, so it isn't terribly OO. It can work in an environment where complex rules must work across widely disparate applications (like a call center application and a self-serve web sales application) when development speed is a concern (as in, say, a high-growth environment).
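
A bare-bones sketch of that shape, assuming an old-style .asmx service and invented names (the business rules live behind the service; clients just round-trip the data):

    using System.Data;
    using System.Web.Services;

    public class CustomerService : WebService
    {
        [WebMethod]
        public DataSet GetCustomer(int customerId)
        {
            DataSet customer = new DataSet("Customer");
            // ...fill from the data access layer...
            return customer;
        }

        [WebMethod]
        public void SaveCustomer(DataSet changes)
        {
            // ...validate against the business rules, then hand to the data access layer...
        }
    }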

I think this leads to the kind of complexity Udi says he has seen with datasets. The main fault line is that knowledge of which methods to call (and where to find them) lives in design documents or in a developer's head. This can easily lead to a nasty duplication of methods and to chaos--problems that functionally don't exist in a stronger object paradigm.

That Said...

Here is where I stick my neck out and reveal my personal preferences (and let all the "real" developers write me off as obviously deluded): although DataSets make admittedly lousy business objects, most non-enterprise-level projects just don't need the overhead that a true object data layer represents. For me, it's a case of serious YAGNI (You Ain't Gonna Need It).

Take any number of .NET open source software projects I've played with: not one uses DataSets, yet not one needs all the complexity of its custom-created classes, either. They aren't doing complex data validation, and their CRUD operations are less robust than those produced automatically by the dataset designer. All at a higher cost in resources to produce.

Or take my current place of gainful employ. We have five ASP.NET applications that all have an extremely complex n-tier architecture--all implemented separately in each web application (and nowhere else--they're not even in a separate library). Each of the business objects has a bunch of properties implemented that are straight get/set from an internal field. And that is all they are. Oh, there's a couple of "get" routines that populate the object for different contexts using a separate Data Access Layer object. And an update routine that does the same. And a create... you get the point. It's three layers of abstraction that don't do anything. I shudder to think how much longer all that complexity took to create when a strongly-typed DataSet would have done a much better job and taken a fraction of the time. It makes me want to call the development police to report ORM abuse.

Which is to Say

Don't let all that detract from Udi's point, though. He's right that for seriously complex enterprise-level operations, you can't really get around the fact that you need good architecture for which datasets will likely be inadequate. Relying wholly on DataSets in that case will get you into trouble.

I personally think that you could get away with datasets being the communication objects between web services in most cases even so, but I also realize that there are serious weaknesses in this approach. It works best if the application is confined to a single enterprise domain (like order processing or warehouse inventory management). Once you cross domains with your objects, you incur some serious side-effects, not least of which is that the meaning of your objects (and the operations you want to perform on them) can change with context (sometimes without you knowing it--want an exercise in what I mean? Ask your head of marketing and your head of finance what the definition of a "sale" is--then go ask your board of directors).

So yeah, DataSets aren't always the answer. I'd just prefer if more developers would make that judgement from a standpoint of knowing what DataSets are and what they can do. Too often, their detractors are operating more from faith than from knowledge.*

*Not that this is the case for Udi. For all he has admitted that he isn't personally terribly familiar with datasets, his examples are pretty good at delineating their pressure points and that tends to indicate that he's speaking from some experience with their use in the wild.

 

21. November 2006 19:23 by Jacob | Comments (0) | Permalink

4 Solutions to DbConcurrencyException in DataSets

Following links the other day, I ran across this analysis of DataSets vs. OLTP from Udi Dahan. His clincher in favor of coding OLTP over using datasets is this:

The example that clinched OLTP was this. Two users perform a change to the same entity at the same time – one updates the customer’s marital status, the other changes their address. At the business level, there is no concurrency problem here. Both changes should go through. When using datasets, and those changes are bundled up with a bunch of other changes, and the whole snapshot is sent together from each user, you get a DbConcurrencyException. Like I said, I’m sure there’s a solution to it, I just haven’t heard it yet.

I thought about this for a minute and came up with four solutions for DbConcurrencyException in this scenario using DataSets (though the first two are essentially the same and differ only by who actually implements it). I'm sure there are others, but this should do for starters.

  1. Use stored procedures created by a competent DBA that take parameters for both the original and the new column state. This means that you check each field with an "OR (<ds.originalValue> = <ds.updateValue>)" condition. This solution passes the same two parameters per field as an "optimistic" pre-generated update statement, but it makes the update statement larger by adding this new "OR" condition for each field.
  2. You can do the same by altering the raw update generated by the DataSet designer. This means sending a longer statement to the database with each update, though that can be offset by setting your batch size higher if you have lots of updates to send (uh, you'd need ADO.NET 2.0 for that). I'd hesitate to use this method, but that's more a matter of personal taste than anything else (because I'd prefer using stored procedures, and I recognize that internal network traffic generally isn't the bottleneck in these kinds of transactions, though on-the-fly statement execution plan creation could be).
  3. Override the OnUpdating for the adapter to alter the command sent based on which fields have actually changed (a rough sketch of what this might look like follows the list). This is probably the closest in effect to the OLTP solution envisioned by Udi. This solution is problematic for me simply because I've never actually tried to do it and I'm not sure you can hook into the base adapter updates on each execution. If you can't, an alternative (in ADO.NET 2.0) would be to create a base class for the table adapters and create an alternative Update function in derived partial classes. In this case, you'd have "AcceptFineGrainedChanges" or some such function that you'd call. Once the alternative base class was created, custom programming per table adapter would be a matter of a couple of moments. I've done something similar using the designer with Sybase table adapters and it worked out pretty well. I'd have to actually try this to make sure it'd work, though. Call this two half-solutions if you're feeling stern about it.
  4. This last would be useful if I have a relatively well-defined use case that isn't going to morph much or require stringent concurrency resolution. In this one, you deliberately break the one-to-one relationship between your dataset and database (i.e. one database table can be represented by multiple dataset tables). In Udi's concurrency example, the dataset would have a CustomerAddress table and a CustomerStatus table. Creating the dataset with custom selects would generate the tables pretty painlessly with appropriate paranoia. Now, this only really pushes his concern down a little, making it less likely to be an issue. It doesn't eliminate it. It'd probably handle most of the concurrency problems people are likely to run into. Or at least, push them out beyond where most people will ever experience them (not quite the same thing). It could be taken to a ridiculous extreme where each field was its own datatable (which is just silly, but I've seen sillier things happen), so a little balance and logical separation would be needed.
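
For what it's worth, here's a rough sketch of what hooking the adapter might look like for option 3. All the names are invented, DBNull and error handling are omitted, and I haven't battle-tested this; it's just meant to show that SqlDataAdapter does expose a RowUpdating event that lets you swap in your own command per row.

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Text;

    class FineGrainedUpdates
    {
        // Sketch only: build an UPDATE that touches just the columns that actually
        // changed in this row, guarded by the original values of those same columns.
        static void OnRowUpdating(object sender, SqlRowUpdatingEventArgs e)
        {
            if (e.StatementType != StatementType.Update)
                return;

            StringBuilder set = new StringBuilder();
            StringBuilder where = new StringBuilder("CustomerID = @key_CustomerID");
            SqlCommand cmd = new SqlCommand();
            cmd.Connection = e.Command.Connection;
            cmd.Transaction = e.Command.Transaction;

            foreach (DataColumn col in e.Row.Table.Columns)
            {
                object original = e.Row[col, DataRowVersion.Original];
                object current = e.Row[col, DataRowVersion.Current];
                if (Equals(original, current))
                    continue;                               // untouched field: leave it alone

                if (set.Length > 0) set.Append(", ");
                set.AppendFormat("{0} = @{0}", col.ColumnName);
                where.AppendFormat(" AND {0} = @original_{0}", col.ColumnName);
                cmd.Parameters.AddWithValue("@" + col.ColumnName, current);
                cmd.Parameters.AddWithValue("@original_" + col.ColumnName, original);
            }

            if (set.Length == 0)
            {
                e.Status = UpdateStatus.SkipCurrentRow;     // nothing actually changed
                return;
            }

            cmd.CommandText = "UPDATE Customer SET " + set + " WHERE " + where;
            cmd.Parameters.AddWithValue("@key_CustomerID",
                e.Row["CustomerID", DataRowVersion.Original]);
            e.Command = cmd;
        }
    }

You'd wire the handler up with adapter.RowUpdating += new SqlRowUpdatingEventHandler(OnRowUpdating) before calling Update, and a zero-row result still surfaces as a DbConcurrencyException when a changed field really does conflict.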

OLTP may seem more natural as a solution to many, but that's likely an issue of preference and sunk costs (because they've done it before and are comfortable with that solution space). It certainly isn't the only solution, though, nor is it a stumper for datasets.

Finally, I’ll add a caveat that I'm not saying that datasets are necessarily to be preferred over stronger object models. I just know that they get pretty short shrift from "real" developers in these kinds of discussions and want to make sure that the waters remain appropriately muddied. There may be a universal stumper for datasets I don't know about. There are certainly environments where a formal OLTP or ORM tool would be a legitimately preferred solution.

 

21. November 2006 05:35 by Jacob | Comments (0) | Permalink

Two Things I Regret

Have you ever been in an interview and gotten some variation on the question "What do you regret most about your last position?" Everyone hates questions like that. They're a huge risk with little upside for you. You're caught between the Scylla of honesty and the Charybdis of revealing unflattering things about yourself.

Still, such questions can be very valuable if used personally for analysis and improvement. In that light, I'll share with you two things I regret about my stay at XanGo. Since I've ripped on the environment there in the past, it's only fair if I elaborate on things that were under my control at the time--things I could have done better.

Neglecting the User

Tim was the Senior IT Manager (the position that became IT Director once XanGo had grown up a bit). He was the best boss I ever had. His tech skills were top-notch (if somewhat "old school"). In addition, he knew his executives and how to communicate with them on a level they understood. It was a refreshing experience to have someone good at both technology and management (and since he's no longer my boss, you can take that to the bank :)).

After I'd had a little break-in time as the new Software Development Manager, Tim and I discussed what we needed to do for the organization. Tim's advice was to establish a pattern of delivering one new "toy" for our Distributors each month. He said that the executive board was very attached to the Distributors and that keeping things fresh and delivering new functionality and tools to them each month would make sure that we had enough of the right kind of visibility. Goodwill in the bank, so to speak.

This sounded like a great idea, and frankly, "toy" was loosely defined enough that it shouldn't have been a hard thing to do. It turned out to be a lot harder than expected, however. In my defense, I'll point out that we were experiencing between 15% and 20% growth per month and that we had done so since the company had started a year and a half before. That growth continued my entire tenure there. Now, if you've never experienced that kind of growth, let me point out some of what that means.

First off, the Rule of 72 (the coolest numeric rule I know) says that dividing 72 by a period's growth rate gives roughly the number of periods it takes to double. At 15 to 20% per month, that means we were doubling every 4 to 5 months (in every significant measure--sales, revenue, Distributors, traffic, shipping, everything).

In case you've never experienced that kind of growth, it feels like ice-skating with a jet engine strapped to your back. Even good architecture will strain under that kind of relentless growth. When this happens, you become hyper-vigilant for signs of strain. That vigilance is grounded in enough reality to be worth maintaining. Unfortunately, it also makes it easy to forget your users.

Developers like to live in a pristine world of logic and procedure. Unfortunately, life, and users, aren't like that. If they were, there'd be less need for developers. Users don't see all the bullets you dodged. They take for granted the fact that a system originally designed for a small start up is now pulling off enterprise-level stunts. They don't see it, so it doesn't exist. It is very easy to get caught up in the technology and forget that often it is the little touches that make your product meaningful. Sometimes the new report you spent an hour hacking together means more than the three weeks of sweating out communication with a new bank transaction processor. And by means more, I mean "is more valuable than".

Not that you can afford to neglect your architecture or needed improvements to sustain the needs of the company and prepare for foreseeable events. If you ignore that little glitch in the payments processing this month, you have really no excuse when it decides to spew chunks spectacularly next month.

What I'm saying here is that you have to balance functionality with perceived value. You have to know your users and their expectations because if you aren't meeting those expectations, no amount of technical expertise or developer-fu is going to help you when things get rough. In the case of XanGo, I could have afforded to ease up on the architecture enough to kick out monthly toys for the users. Yeah, some things would have been a touch rockier, but looking back there was room for a better balance.

Premature Deprecation

When I arrived at XanGo, our original product (a customized vertical market app written in VB6 on MS SQL Server) was serving way beyond its original specifications. We'd made some customizations, many of them penetrating deep into the core of the product. Our primary concern, however, was the Internet application used by our Distributors in managing their sales. We spent a month or two moving it from ASP to ASP.NET and ironing out bugs brought on by the number of concurrent users we had to maintain. We also removed the dependence on a couple of VB6 modules that were spitting out raw HTML (yeah, I know. All I can say is that I didn't design the monster).

Anyway, after that was well enough in hand, we gave a serious look to that VB6 vertical market app. Since VB6 wasn't all that hot at concurrent data access and couldn't handle some of the functionality we were delivering over the new web app, we decided that it should be phased out. Adding to this decision was the fact that we had lost control of the customizations to that app and what we had wouldn't compile in its present state.

Now developers (and for any management-fu I may have acquired, I remain a developer at heart) tend to be optimistic souls, so we figured "no big", we'll be replacing this app anyway. And we set to work. Bad choice. In a high growth environment, the inability to fix bugs now takes on a magnified importance. Replacing an application always takes longer than you expect if only because it's so easy to take the current functionality for granted. Any replacement has to be at least as good as the current application, and should preferably provide significant, visible improvements.

The result of this decision was that we limped along for quite some time before we finally came to the conclusion that we absolutely must have the ability to fix the current app. We paid a lot of political capital for that lack. In the end, it took a top developer out of circulation for a while but once it was done, it was astonishing how much pressure was lifted from Development.

It's the Users, Stupid

No, I did not mean to say "It's the stupid users." When it comes right down to it, software exists to serve users, not the other way around. As developers, it is easy to acquire a casual (or even vehement) dislike of our users. They are never satisfied, they do crazy stuff that makes no sense, and they're always asking for more. It's tempting to think that things would be so much better without them.*

I got into computers because I like making computers do cool stuff. Whatever gets developers into computers, though, it's a good idea to poke your head up periodically and see what your users are doing. Get to know who they are. Find out what they think about what you've provided for them. Losing that focus can cost you. Sometimes dearly.

*I think one of the draws of Open Source is that the developer is the user. It's also the primary drawback. But that's a post for another day.

 

17. November 2006 20:10 by Jacob | Comments (0) | Permalink

More Validation

It's always nice to have your opinions confirmed by someone you respect. Joel Spolsky's latest series on recruiting developers has a final section that includes essentially the same point I made a couple of months ago: that specific technologies aren't as important in hiring developers as general savvy is. He even issues the same caveat that you do need domain experts for leavening. Nice. As is usual with Joel, he includes a thorough analysis of the issue he explores, so there's a lot of good rumination there.

 

8. September 2006 09:59 by Jacob | Comments (0) | Permalink

Professional Integrity

Lidor Wyssocky has some good thoughts on why it is that developers don't implement changes that they know would be helpful.

The problem is that although we know exactly what doesn’t work right and how it should be fixed, most of us will never say anything. We don’t say anything because there’s a very good chance the minute we do we will be marked as uncooperative, pessimistic, or simply detached from the business reality. (emphasis in original)

He concludes with his call to action.

If more of us say what we know in our hearts to be true, the rest won’t be able to ignore it anymore. Hopefully.

And I agree with him. What he is talking about is an aspect of professional integrity. If you are a professional that a business relies on for good decision making, then you need to act the part and provide reasonable suggestions wherever they may do good.

But Lidor's points come with both a flip side and a problem, inherent in professional integrity itself, that need to be explored.

The Flip-side

While you avoid negative attention by remaining silent about needed process change, you can gain positive attention by supporting whatever has caught the eye of your executives. This is a major contributing factor to "The Silver Bullet Syndrome". At XanGo for example, we had one company convince management that they could cut development costs to 10% of their current level. Not 10% off, mind, that's 10% of current levels.

Despite all logic to the contrary (I mean, think about it--if a product could actually deliver cost savings of 90%, every company on the planet would be using it--they'd be incompetent not to), XanGo ended up spending millions of dollars and wasted a full year of development on this silver bullet. What was most fascinating to me were those who were willing to endorse this silver bullet. Those people were promoted to team leaders and project managers. Any who would oppose the new development project were either removed before its introduction or told (in at least one case explicitly) that negative feedback would put their jobs in jeopardy.

In this way, technical people who should have known better ended up perpetuating the problem, not by staying silent but by actively promoting bad solutions. They were rewarded for doing so. Here's the thing: in most cases, there really is no price to pay for backing a losing silver-bullet solution. Oh, those in front of the board have negative exposure when it all blows up. Some of the higher management tiers are at risk as well. But for most of IT, there simply is no bill to pay for being in the pack.

The Problem

Here's the thing: having Professional Integrity carries not-insignificant risks.

  1. Having integrity makes you predictable. Those with an agenda can plan their actions confident of what you will do. That means that if things get nasty, you end up following Marquess of Queensberry rules in a bar fight.
  2. Your compensation is in the hands of ignorant people. They don't know what you know. Many of them may be smarter than you. Even if they aren't, though, they are still the ones signing the paychecks.
  3. There is seldom a single, obvious best answer. Honest and rational people can arrive at differing conclusions based on the same information (because they weight aspects of the problem domain differently). This can easily split the message received by management and open the door for bad solutions. This also leaves you vulnerable to the manipulative because they can confuse the issue at need.
  4. Costs of bad solutions are often deferred, while the cost of changing minds is immediate. Whether you are opposing a bad proposal or proposing a better process, you are asking for change, and that makes people uncomfortable. If you make people uncomfortable, they will often hold you responsible for their discomfort.
  5. Sometimes having professional integrity means looking for a new job. Whether you are fired or are forced to resign, some positions are simply incompatible with maintaining professional integrity. These situations are rare, but they can happen.
  6. People aren't rational. As a result, businesses often make choices that seem irrational. Sometimes those choices are right even though they are irrational. Professional Integrity in such a situation is problematic at best.

As I see it, Lidor's call to arms makes sense but is unlikely without more support. The thing is that the problem is bigger than he has acknowledged. He is asking us to take actions that are overtly risky and he hasn't given us enough of a reason to do so. I mean, the emperor may not have any clothes, but there's no telling how he'll react to you pointing that out.

The Solution Domain

"Because it's the right thing to do" resonates with most of us, I think. But it is important to acknowledge that, at heart, Lidor's plea is a moral and not a technical one. As such, it needs to be approached from a social, not a technical standpoint. The thing is, we are equipped to make this happen, but we need to explore it in the right context if we hope to make progress.

Societies have a lot of experience coercing correct behavior. There are a lot of tools available, some better than others. My preferences tend to flow from my LDS background. As such, I'd advocate proselytizing correct procedures. We should explain as clearly as we can the benefits of acting correctly. While the payoffs of acting correctly are long-term, they exist nevertheless and are provable and measurable. Professional Integrity in IT has similar long-term value (above and beyond correct practices) to businesses and those who learn to identify and value it will gain the benefits it brings. A company that hires professionals with integrity will do better than companies that hire similarly talented people who lack that dedication.

As long as we're borrowing from religious tradition, let's hit up the Catholics and/or Jews (leaving no stereotype unturned) and apply some well-earned shame. I know who sold us out at XanGo. My reaction to them in the future can (and should) be informed by their actions there. I think we do ourselves a disservice if we know compromised IT professionals and don't tell them our opinion of their actions or make those opinions public. Repentance (more religious terminology, but applicable) should be possible, certainly--people can change, and reformation should be encouraged. That said, it's important that forgiveness be contingent on honest efforts toward reparation and change.

On a more general level, ostracizing those who behave badly is one of society's greatest tools to restrain bad behavior. The next time I review resumes to evaluate new hire candidates, I'll have an eye out for those I know who, while they may be bright and talented developers, ended up endorsing those things that were bad for the business. More generally, I'll try to craft questions that explore a candidate's professional integrity in addition to their technical expertise. Reputations in IT should be more than merely technical. Professional Integrity is important not just to businesses but to the perception of IT as a whole. The IT crash in 2001 contained a lot of payback for abuses in IT during the build-up of the bubble. While we were on top many of us made unreasonable and expensive demands on companies (some personal, some technological) and those abuses weakened us as a group.

Collective bargaining is one route that I'm sure will come up--a union or standards body that enforces "correctness". Personally, I'm not a fan of that option. Bureaucracies are inefficient and tend to be in opposition to business interests. We do not need more antagonism from the executive suite. Unions also have a tendency to groupthink that I consider inimical to technology innovation.

I'm sure there are others that I am overlooking. I'd love to hear further thoughts in the comments. Please leave 'em if ya got 'em. Don't make me beg.

I acknowledge that my thoughts here will be hard to implement. They require conscious short-term effort with no promise of reward. It's certainly beyond the scope for a single individual to affect. Still, I believe it's possible to make things better than they are right now and that the potential for reward is both real and achievable. Plus, Paladins are made for exactly this kind of idealistic fight for what's right...

 

28. August 2006 21:23 by Jacob | Comments (0) | Permalink

Experienced Developers

The following applies mainly to in-house business software development. It might or might not apply to ISV or other product development houses. I think that there's room for broad application, but you can hit my list of software blogs if you want some quality sources for more generalized ISV or product development exploration.

Back when I was looking to hire developers a couple of years ago, I knew some programmers that I wanted on my team. I had worked with them before and knew what they were capable of. Unfortunately, they didn't have much experience with .NET--our platform at the time. I made the case then (to my boss and HR) that a good developer is a good developer and that the language and framework are more or less immaterial for people with a broad set of experience. The syntax and capabilities of the language and environment are teachable, and a developer's broad experience helps them pick it up quickly.

I was fortunate enough to bring one of those developers (who is currently working in indie games) onto my team. The experience was great. I was right that he was able to come up to speed and pick up the nuances of the environment very quickly. Further, he was able to solve some issues outside of the framework in ways that saved us a lot of time (using Python for text file manipulation, for example).

So does this mean that you can toss out environment-specific skill sets when evaluating who to hire? Not in my opinion. You see, at that time we already had a core group that was intimately familiar with the capabilities of .NET. Domain-specific knowledge is crucial in producing software applications that actually do what they are supposed to do in a reasonable timeframe. People who have already digested the learning curve are invaluable in blazing the way for others who haven't yet done so. The reason specialists are crucial is simple--somebody who has already walked the trail is able to point out the pitfalls and shortcuts and help the whole team move quickly. The new guy was able to come up to speed so fast because he had people he could ask when he had questions and who could explain why things worked the way they did.

I came out of this experience with respect for having a diverse programming team. You have to have at least some specialists who know the environment inside and out. These specialists are going to be the limit of what you can accomplish in a lot of ways, so they need certain skills--chief among them, the ability to communicate with other developers (because they'll do more of that than they probably want to, and because a specialist who can't communicate isn't able to influence the project nearly as well as one who can).

But I personally don't think it is a good idea to have a team or department composed solely of specialists. Any software development that requires more than two people is going to run into problems in a wide array of domains, and having a broad skill set to draw on can be very useful. What proportion of specialists to generally kick-ass developers do you need? Well, that's a good question. I welcome ideas in the comments, but I suspect this is one of those things that is highly contextual and thus at the heart of the "art" of software development. I'm thinking that a narrowly defined ISV benefits from a higher ratio of specialists than, say, a multi-level marketing company looking to manage its necessarily custom business needs.

25. July 2006 19:30 by Jacob | Comments (0) | Permalink
