API's that Suck

June 3, 2012

I am a computer programmer. I read for a living.

Filed under: Uncategorized — Grauenwolf @ 12:44 am

My job is reading. Sometimes I write, but mostly it’s about reading. Every day I start by reading my email to see if anything blew up over the night. In the past I would spend each breakfast poring over ops reports, but these days it is rare that I need to do any production support work.

Hopping on the train, I’ll spend the next hour reading the 300+ blogs and news feeds that I actively track. Most articles are just skimmed and forgotten, but the ones that are potentially newsworthy are flagged, reread, and ultimately forwarded to our lead tracker.

Showing up at work, I put InfoQ aside and turn to the needs of our clients. This usually starts by reading requirements documents, bug tracker entries, and other such task-based documentation.

I am primarily a backend developer. This means I’m constantly being tasked to deal with interoperability between our new software, existing client software, and countless third-party systems. Often this means reading through 100+ page specifications that may be accurate, but are most likely complete works of fiction. That will take me to lunch.

During my lunch break I read bookmarking sites, mostly Reddit. I do this in part to blow off steam by arguing about pointless topics, but also to find out if all those blog entries I’ve been reading are legit or bullshit.

After lunch I start reading code. You see, I’m not just any old backend developer. I’m the backend developer that’s routinely assigned to ongoing projects. While others are building new software, I find myself tasked with figuring out why the client’s 20-year-old PowerBasic application is having performance problems. Or maybe I’m tracking down race conditions in a Silverlight application that is so big it looks like it is 20 years old. I read a lot of code.

Check in a bug fix and then read some more. I mentioned third-party systems, right? Well, generally speaking, “third-party” means “you don’t get a development environment”. A lot of my code never runs outside of production, even when that code deals with million-dollar trades. Like a human computer, I trace through each line of code, executing it in my mind. I don’t even bother with mock testing anymore; mocks just trick me into thinking that I know more about the black box on the other side of the connection than I really do.

Signing off for the day, I’m back on the train home. (Oh fun, more half-baked blogs to read.)

Eat dinner; then pick up a book to read. Often it’s a new technology and I’ve got to interview the author. Right now I’ve got a half-complete interview for LightSwitch on my desk and a pair of books about Go on the nightstand.

Other times the book is a work in progress for Apress. In publishing, a technical editor is a proofreader for grammar and syntax. Once they are done they hand the draft over to me, the technical reviewer, the proofreader for code and content. I have to read and reread every line of code in a 500+ page book on C#. There are no bug patches for a dead-tree book; I’ve got to get this right or my author will look like an idiot. Fortunately I usually get soft copies, so I can at least paste some of it into a compiler. Back in the ’90s I would often get printed drafts to mark up in pencil and overnight back to the publisher.

After the books I start on InfoQ’s educational articles. InfoQ doesn’t have the resources of a major book publisher, so I’ve got to play the role of both technical reviewer and technical editor. And until we get some more leads (yes, we’re hiring), I’ve got to handle JavaScript, PHP, Objective-C, Go, Android-style Java, and any esoteric language that our readers happen to find interesting this month. Spotting bugs in languages that you don’t really know, without access to the right compilers, takes quite a bit of practice. And a lot of reading.

Time to sleep? No. Next I’ve got to read the news reports by my reporters in training. And I’ve got to read all of their sources to make sure they are accurately reporting what’s going on.

I am a computer programmer. I read for a living.

Postscript: And now that I’m actually done working for the day, I find myself with a longsword in one hand and a 16th-century German fencing manual in the other.

November 28, 2011

100% Code Coverage – What a pain in the ass

Filed under: Granite — Grauenwolf @ 8:10 pm

When I decided to create a 1.0 version of Granite I told myself I wouldn’t do it without full code coverage. Well today I finally hit it:

[Image: code coverage report showing 100% coverage]

In order to meet this goal before checking into the retirement home, I had to strip out every feature that I didn’t absolutely need. This was much harder than I was expecting; the code was actually growing faster than the unit tests. But I guess that’s the nature of research projects.

October 3, 2011

Rules for Writing Automated Tests: Label your Assertions

Filed under: Testing — Grauenwolf @ 6:37 pm

Before one can determine how a test has failed, one must first determine what part of it failed. Simply having the message “expected blah, but got foo” doesn’t really help someone reading the test results; they need to know why blah was expected in the first place.

Single Assertion per Test

There are several ways to label assertions. One of the more popular ideas for the last few years has been the “single assertion per test” theory. Under this pattern the test name itself is the label for the assertion; no further clarification is needed.

Unfortunately, this leads to a lot of repetitive code. Consider this simple test to ensure that an object can be serialized to XAML and back without losing any information.

[TestMethod()]
public void XamlParseCreateTest()
{
    var original = new Foo() { FooBar = "Sam" };
    original.Bars.Add(new Bar() { Weight = 5, Cost = 1.1M });
    original.Bars.Add(new Bar() { Weight = 10, Cost = 2.2M });
    original.Bars.Add(new Bar() { Weight = 15, Cost = 3.3M });

    var xaml = XamlUtilities.CreateXaml(original);

    var copy = XamlUtilities.ParseXaml<Foo>(xaml);

    Assert.AreEqual(original.FooBar, copy.FooBar);
    Assert.AreEqual(original.Bars.Count, copy.Bars.Count);
    for (int i = 0; i < original.Bars.Count; i++)
    {
        Assert.AreEqual(original.Bars[i].Weight, copy.Bars[i].Weight);
        Assert.AreEqual(original.Bars[i].Cost, copy.Bars[i].Cost);
    }
}

Even though this is an incredibly simple test, the “single assertion per test” theory would require writing 8 separate tests. Even worse, each additional property added to Bar would require three more tests. These tests are not only time-consuming to write, they are also more expensive to run and create an unnecessarily high maintenance burden.

Labeled Assertions

A far more effective means of ensuring that the right information is conveyed is to use the message parameter on the assertion.

[TestMethod()]
public void XamlParseCreateTest()
{
    var original = new Foo() { FooBar = "Sam" };
    original.Bars.Add(new Bar() { Weight = 5, Cost = 1.1M });
    original.Bars.Add(new Bar() { Weight = 10, Cost = 2.2M }); 
    original.Bars.Add(new Bar() { Weight = 15, Cost = 3.3M });

    var xaml = XamlUtilities.CreateXaml(original);

    var copy = XamlUtilities.ParseXaml<Foo>(xaml);

    Assert.AreEqual(original.FooBar, copy.FooBar, "String field was not copied");
    Assert.AreEqual(original.Bars.Count, copy.Bars.Count, "Collection count is wrong");
    for (int i = 0; i < original.Bars.Count; i++)
    {
        Assert.AreEqual(original.Bars[i].Weight, copy.Bars[i].Weight, string.Format("Integer property Weight at Bars[{0}] was not copied", i));
        Assert.AreEqual(original.Bars[i].Cost, copy.Bars[i].Cost, string.Format("Decimal property Cost at Bars[{0}] was not copied", i));
    }
}

April 24, 2011

Why doesn’t anyone know how to implement the factorial function in C++ with proper error handling?

Filed under: Uncategorized — Grauenwolf @ 8:48 pm

Not too long ago I asked this question on Stack Overflow:

How do you implement the factorial function in C++? And by this I mean properly implement it using whatever argument checking and error handling logic is appropriate for a general purpose math library in C++.

Of the five answers I got, only four actually showed any code. Of those, not a single one was production quality; they were all just cute ways to answer homework problems. In fact, someone tagged my question as “homework” and others closed it as a duplicate. Not a single one correctly handled n=-1, nor did any of the examples in the questions they linked to.

Seeing that the moron squad didn’t even bother reading the question and only glanced at the headline, I asked again, this time making sure to include “with proper error handling” in the title. I got two more answers, this time with no code whatsoever and again no discussion of proper error handling in C++.

Why are applications and operating systems still so unreliable after all these years? Possibly because the C++ community that they are based on still thinks “undefined behavior” is the correct response whenever someone passes in invalid arguments.
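For what it’s worth, here is a sketch of the kind of answer the question was fishing for. The choice of a 64-bit unsigned result and of exceptions (rather than, say, an error code or `expected`-style return) is my own assumption, not the one true design; the point is simply that invalid arguments get a defined, documented response.

```cpp
#include <cstdint>
#include <stdexcept>

// Factorial with explicit argument checking instead of undefined behavior.
std::uint64_t factorial(int n)
{
    if (n < 0)
        throw std::domain_error("factorial is undefined for negative integers");
    // 20! = 2,432,902,008,176,640,000 is the largest factorial that fits
    // in an unsigned 64-bit integer, so anything larger would overflow.
    if (n > 20)
        throw std::overflow_error("factorial(n) does not fit in 64 bits for n > 20");

    std::uint64_t result = 1;
    for (int i = 2; i <= n; ++i)
        result *= static_cast<std::uint64_t>(i);
    return result;
}
```

Note that this version handles n=-1 by throwing rather than silently looping, which is exactly the case none of the answers covered.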

March 12, 2011

Design Driven Development

Filed under: Uncategorized — Grauenwolf @ 5:56 pm

Step 1: Requirements

Read the requirements. All of them. Breathe them in and get a feel for what the customer really wants. And note how that has changed since you last read them all.

Once you feel the Tao of the project, grab a block of related requirements. How big of a block? Whatever feels right. We call this a “feature”.

Step 2: Design the Features

Start by writing the use cases, the step-by-step explanation of how the feature is going to actually work from the user’s perspective. Then start adding any database schema, class diagrams, test cases, flowcharts, and anything else you need to understand the feature. The key word here is “you”. No one else needs to see the design documents. Well, unless you happen to be using them as a prop to show why they are high.

Step 3: Bitch about the Requirements

Invariably the requirements are going to have holes: areas that are unclear, underspecified, or written by someone who is clearly high. The only purpose of the design process is to find those holes and get them fixed. Once you fill your written design with highlighted question marks, crack open a beer and relax. It is going to take product management ages to figure out what the hell they really wanted you to build.

Step 4: Review the Design

Get a good night’s sleep. Or party till you drop. It doesn’t really matter, just so long as you let the design age a bit.

Now take a look at it. Does it still make sense? Or was it clearly written by a crackhead and needs to be redone?

Step 5: Hacking

Now that you know exactly what you want, it’s time to grab some energy drinks and start writing the code. If you are using automated tests, now is the time to slap them together. Heck, you can even go full TDD at this point. I prefer ADD, but that’s just me. Just don’t forget to update the design documents with whatever edge cases or significant redesigns you happen across.

Step 6: Testing

Remember all those use cases and test cases you wrote back in step 2? Of course not. Go look at them again; I’ll wait…

Now really test your code. Automated what-you-ma-call-its don’t count at this point. Actually go through the requirements and design documents and verify everything matches what you expected.

Step 7: Documentation

Find your test cases and give them to the QA department. Then throw everything else away. You know damn well you aren’t going to keep it updated, so don’t even bother pretending you will do otherwise.

February 27, 2011

CIL Parameter Passing: By-value, By-ref, and Typed Reference

Filed under: Uncategorized — Grauenwolf @ 1:33 am

The common language runtime supports three kinds of parameter passing.

By-value

By-value parameters work as one would expect. Primitive types are simply copied onto the stack, as are value types, object references, managed pointers, and unmanaged pointers.

CIL Type: typename

By-ref

By-ref parameters are, of course, references to other values. Any of the types that can be passed by value can also be passed by reference. However, there are certain restrictions. In order to get a reference to a value, that value must have a home. A home can be any of the following:

  • An incoming argument
  • A local variable of a method
  • An instance field of an object or value type
  • A static field of a class, interface, or module
  • An array element

At first glance the list appears to be comprehensive, but something is missing. Intermediate values, those which are on the stack but are neither a local nor an argument, do not have a home and thus cannot be referenced.

CIL Type: typename&
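The same “no home, no reference” restriction has a rough analogue elsewhere. As a loose illustration only (this is C++, not CIL), a non-const reference can only bind to something that has storage, much as a CIL by-ref argument must point at a home:

```cpp
#include <cassert>

// A by-ref style parameter: the callee mutates the caller's storage.
void increment(int& x) { ++x; }

void demo()
{
    int local = 1;
    int values[] = { 10, 20 };

    increment(local);      // OK: a local variable has a home
    increment(values[1]);  // OK: an array element has a home
    // increment(local + 1);  // does not compile: 'local + 1' is an
    //                        // intermediate value with no home to refer to

    assert(local == 2);
    assert(values[1] == 21);
}
```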

Typed Reference

A typed reference parameter is a reference to a home matched with the argument’s type information. Normally it isn’t necessary to explicitly pass the argument’s type information as one of the following is usually true:

  • The argument is a value type that matches the parameter type
  • The argument is a reference type and thus the type information is encapsulated.
  • The argument is a boxed value type, again with the type information encapsulated.

It is only when the argument is an unboxed value type that doesn’t necessarily match the parameter type that a typed reference is needed. This will never occur in an early-bound language such as the original version of C#, but it can in a late-bound language such as VB with Option Strict turned off.

A typed reference, then, is one that combines a by-ref parameter with extra type information. The reason this is necessary is that it is possible for an argument to otherwise lose its type. This occurs when:

  1. The parameter is a value type. Value types do not internally store their type unless boxed.
  2. The parameter is passed without boxing.
  3. The argument’s type doesn’t necessarily match the parameter’s type.

CIL Type: typedref
CLR Type: System.TypedReference

February 13, 2011

Meditations on Testing

Filed under: Uncategorized — Grauenwolf @ 1:20 am

There is never enough time, so first test that which you are uncertain about.

If it is easy to test, it is easy to write. If it is easy to write, it didn’t need to be tested.

Test the way the program will be used, not the way it should be used.

A wise student tests his code. A wise master tests everyone else’s.

A test is not a debugger. Nor is a debugger a test. Learn them both.

The student asks his master, “Why do our tests take so long to run?”
The master replies, “The services are slow.”
The student then asks, “Why don’t you mock the services so we can run the tests faster?”
The master deletes all of the test code and says, “The tests are now fast.”

All combinations of user input should be tested. Configuration files are user input.

Do not test what the compiler can prove.

Play with the clock.

If a child can break it in an hour, an adult can in 5 minutes.

January 29, 2011

Foundry – Goal: Build a compiler that will print “Hello World”

Filed under: Uncategorized — Grauenwolf @ 8:52 pm

The goal is to create a CLR compiler for Foundry, my research project for this year.

Source File

Program Test1
    References
    mscorlib
    End References
End Program
Imports 
    System

End Imports
Function Main 
    Let message = "Hello World"
    Console.WriteLine(message)
End Function

Tasks

  1. File level parser. This needs to understand import statements and function blocks, but not the contents of functions.
  2. Level 1 symbol table. Requires reflecting over core assemblies
  3. Parsing let-style variable statements. This will require an abstract syntax tree with nodes that can later be annotated with type information.
  4. Level 3 symbol table for storing locals. (Level 2 is for class-level symbols, which don’t apply to free functions. Levels 4+ are for nested structures)
  5. Parsing simple function calls.
  6. Emitting a free function as IL code.

Abstract Syntax for File-Level Parser

<Import> <Identifier>

<Function> <Identifier “Main”>

<Function-Body>

<End Function>

The file-level parser doesn’t concern itself with the contents of functions. This allows us some degree of error recovery, as one bad function will not prevent us from parsing the remainder of the file.

Proposed object model (using XML notation because I don’t feel like drawing diagrams tonight)

<File>

    <Imports><Import Namespace="xxx" /></Imports>

    <Free-functions>

        <Function Name="Main" />

    </Free-functions>

</File>
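The proposed model could be sketched as a pair of plain records. C++ here is purely illustrative (Foundry itself targets the CLR), and every name below is my own guess based on the XML above:

```cpp
#include <string>
#include <vector>

// One free function as seen by the file-level parser: the name is known,
// the body is deliberately left unparsed (see task 1).
struct Function
{
    std::string name;           // e.g. "Main"
    std::string unparsed_body;  // handed to the later parsing passes
};

// A parsed source file: imports plus free functions.
struct File
{
    std::vector<std::string> imports;      // e.g. "System"
    std::vector<Function> free_functions;  // one bad body won't sink the rest
};
```

Keeping the bodies opaque at this level is what buys the error recovery described above: a bad function is just a `Function` whose body fails later, not a parse failure for the whole file.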

Deadline: My self-imposed deadline is March 12, which allows for weekly milestones.

January 3, 2011

The Story of UltraBase: Chapter 4

Filed under: UltraBase — Grauenwolf @ 8:35 pm

Fred didn’t like project references. In fact, he absolutely hated project references. So when he started on the middle tier he made sure his team only built their project using assembly references. Moreover, those assembly references all had to point to the golden location, a shared drive where releasable code was posted.

In some cases this makes a lot of sense. Especially when the purpose of each assembly is clearly delineated and low level assemblies are completely unaware of the assemblies that use them. But this situation is a little bit different.

To start with, you need to know about the six assemblies that make up the middle tier. From highest to lowest they are the Adapter, Core, BusinessLogic, DAL, Config, and Data libraries. Each library builds upon the ones before it. For example, in order to add a UserSearch function to the Adapter and BusinessLogic layer you must first add a UserSearch entry to the DataMapperEnum found in the Data Library. And all of the parameters must exist on a UserSearchInput object found in the DAL assembly.

I’ll walk you through the process. First get the UserSearch stored procedure from the database team. Then enter in its name, parameters, and output columns into the UltraBase configuration database. Since databases don’t really work with source control, make sure you kick off a backup.

Open up the Data library’s solution and update the version number. Kick off the code generation process, and then check everything in.

Once the build machine is done, copy the new version of Data into the golden location. Then open the Config library’s solution and update its version number. Change its reference to point to the new version of Data, regenerate the code, and check it in.

Once the build machine is done, copy the new version of Config into the golden location. Then open the DAL solution and update its version number. Change its references to point to the new versions of Config and Data. Regenerate the code and check it in.

Once the build machine is done, copy the new version of DAL into the golden location. Then open the BusinessLogic solution and update its version number. Change its references to point to the new versions of DAL, Config, and Data. Regenerate the code and check it in.

Once the build machine is done, copy the new version of BusinessLogic into the golden location. Then open the Core solution and update its version number. Change its references to point to the new versions of BusinessLogic, DAL, Config, and Data. Regenerate the code and check it in.

Once the build machine is done, copy the new version of Core into the golden location. Then open the Adapter solution and update its version number. Change its references to point to the new versions of Core, BusinessLogic, DAL, Config, and Data. Regenerate the code and check it in.

Once the build machine is done, you can deploy the new version of the middle tier.


With all this done, you can change your website code to use the middle tier.

December 30, 2010

The Story of UltraBase: Chapter 3

Filed under: UltraBase — Grauenwolf @ 8:35 pm

One of the best things about Fred’s UltraBase code generator is that it really pushes the boundaries of what’s possible in C#. The first couple of times I tried to open the ProductDomainMapper class, it actually crashed my IDE. After disabling every extension I had, I was finally able to see the code. I’m still not entirely sure why Visual Studio was crashing, but I think it may have something to do with the 92 interfaces declared on the class:

public partial class ProductDomainMapper : BaseDomainMapper,
    ISelect <AmountCategorySelectData,AmountCategorySelectInput>,
    ISelectByCode <BondOfferingSystemServiceModelSelectForRepData,BondOfferingSystemServiceModelSelectForRepInput>,
    ISelect <BondQualitySelectData,BondQualitySelectInput>,
    ISelectByKey <DisclaimerSelectByKeyData,DisclaimerSelectByKeyInput>,
    ISelect <IDCRankGroupSelectData,IDCRankGroupSelectInput>,
    IUpdate <OfferingUpdateTotalQuantityData,OfferingUpdateTotalQuantityInput>,
    ISelect <RatingCategorySelectData,RatingCategorySelectInput>,
    ISelectByCode <GetKeyData,GetKeyInput>,
    ISelectByKey <BondReportItemSelect1346Data,BondReportItemSelect1346Input>,
    ISelectByCode <AllBondsSearchData,AllBondsSearchInput>,
    ISelectByCode <CanadianSearchData,CanadianSearchInput>,
    […]
    IInsert <BondDisclosureMessageQueueInsertData,
    BondDisclosureMessageQueueInsertInput>

In case you were wondering what all these interfaces did, here is the complete implementation of one of them:

public IResponseDTO<AmountCategorySelectData> Select(IRequestDTO<AmountCategorySelectInput> input)
{
    var mapper = DataMapperFactory<ISelect<AmountCategorySelectData,AmountCategorySelectInput>>(DataMapperEnum.AmountCategory,BehaviorTypeEnum.Select);
    if (mapper != null)
        return mapper.Select(input);
    return InvalidConfig<AmountCategorySelectData>();
}

And no code sample would be complete without showing how the code is called:

var mapper = new ProductDomainMapper();
var input = new RequestDTO<AmountCategorySelectInput>();
input.Input = new AmountCategorySelectInput();
input.Input.SortAscendingFlag = 1;
IResponseDTO<AmountCategorySelectData> result = mapper.Select(input);
