Viewstate Effects on Search Engines – Part 2

It has been 12 days since Dave put up his viewstate test pages with the keyword Arkliode.  Watching Google each weekday has brought about a few interesting observations:

  1. Initially the index page with a link to each of his tests ranked #1.  This went on for most of last week.
  2. One of the tests ranked under the index page when you click on the “View similar results” link on the initial Google search.
  3. My original post was ranking #2 for most of last week.  It only had the word “arkliode” once in one of the comments.
  4. This morning, my original post shows up at #1 (see the Arkliode results snapshot from 6/11/2007).  Yes, that is the post with the word displayed only once, in a comment.
  5. Google gives more weight to blog posts.  This has been generally accepted for quite a while.  That explains the change in #1 ranking.
  6. Incoming links play a large role.  I attribute the drop of the one page that had been returning in the results to this, since its incoming links only go to the index page.
  7. Duplicate content has been assumed to have a negative impact.  Since most of the test content pages have the same or very similar content, I am theorizing that Google is recognizing them as duplicates, especially since the only links going to them are from the index page.

The next question is, how do we begin to make the test valid?  I suppose we would have to post the different pages on separate sites to try to get a better idea, yet the popularity and ranking of the sites would undoubtedly play into the ranking.

Dave has changed things up a bit, which might account for the changes over the weekend. Should people begin linking to the pages on their sites to see if indexing begins to happen?  That’s what Dave did.  Here are the links to the pages.

I will check back in later this week or early next week.  I’m sure Dave will have at least one update in that time-frame as well.

Troubleshooting Memory Leaks in .NET

OK, you’re still here.  You are a brave soul if you’ve stuck around after a title like that, or else you are desperate!  That is exactly where I found myself over the last two days.

A product we are currently working on has a process that, well, processes a lot.  It goes through several different data gathering, manipulation, saving and printing operations.  The end result of this process is a print job that takes about an hour and produces about 1000 printed pages.

During development we normally sent the jobs to a PDF printer or simply had the process stop after printing 20 or 30 pages.  Finally the time came to give this a real test, a complete dry run!


I know what you’re thinking, “you should have done that in Dev at least once!”  You are right of course, however sometimes we let things slip due to schedules and pressure.  Lesson learned, I hope!

It appeared as though there was a memory leak causing the application to crash.  Monitoring the memory usage with Process Explorer confirmed this to be the case.  Now to the task of tracking it down.

I must admit that I have never had a leak like this one.  After some initial code reviews, there were a few places where we were able to identify potential problems.  Implementing code to fix these “phantom menaces” was not successful.  Now it was time to really dig in.  The downside was, I did not know how to dig, and I didn’t have a shovel. 😦

After some searching around on Google, I ran across Finding .NET Memory Leaks by Phil Write.  It was not the easiest thing to find, but it was well worth the time.  Phil goes step by step through using the sos.dll (Son of Strike) debugging extensions and explains the basics of tracking down what you think the problem is.  Unfortunately our problem was not that easy to find.

I ended up comparing the output of !dumpheap -stat from very early in the process against another dump from much later on down the line.  It was a tedious exercise, but a necessary one.  Finally I happened upon an object that had a large jump in its count between the two samples.  Now I had a place to start!  Using Phil’s instructions again, I was able to find out what was holding on to a reference and implement a fix.  It also led me to a second leak that we did not know existed and that had been around for quite a while.  It turned out that the first leak we fixed would not have been a problem if the other one had been behaving properly.
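The heap-diff step is tedious by hand but easy to automate.  Here is a minimal sketch (my own, in Python, not part of the original toolchain) that compares two saved !dumpheap -stat outputs and ranks types by growth in instance count.  It assumes the usual four-column format (method table, count, total size, class name); the type name MyApp.PrintJob below is a hypothetical example, not from the actual project.

```python
import re

def parse_dumpheap_stat(text):
    """Parse '!dumpheap -stat' output lines of the form: MT, Count, TotalSize, ClassName."""
    counts = {}
    for line in text.splitlines():
        m = re.match(r"^\s*[0-9a-fA-F]+\s+(\d+)\s+(\d+)\s+(\S.*)$", line)
        if m:
            name = m.group(3).strip()
            counts[name] = counts.get(name, 0) + int(m.group(1))
    return counts

def biggest_jumps(early, late, top=5):
    """Rank type names by the growth in instance count between two dumps."""
    growth = {name: count - early.get(name, 0) for name, count in late.items()}
    return sorted(growth.items(), key=lambda kv: kv[1], reverse=True)[:top]

# Two tiny, made-up samples taken "early" and "late" in the run
early = parse_dumpheap_stat("00007ff8a1b2c3d0 12 384 System.String\n"
                            "00007ff8a1b2c4e0 3 96 MyApp.PrintJob")
late = parse_dumpheap_stat("00007ff8a1b2c3d0 15 480 System.String\n"
                           "00007ff8a1b2c4e0 900 28800 MyApp.PrintJob")
print(biggest_jumps(early, late))  # MyApp.PrintJob shows the suspicious jump
```

Once a suspect type falls out of the diff, the article's next step (finding what roots the instances, e.g. with !gcroot) has a concrete target.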

This is a good example of why bugs can be good.  The second memory leak will be taken care of within the next day or two and the product will be that much better for it.


Thanks Phil for such a wonderful and simple to understand article!

EDIT: 8/18/2010 – updated link to Phil’s article.  Thanks Aaron D for pointing it out!

Review – VI emulation for Microsoft products

If you, like me, find yourself using Microsoft products for your daily operations (or are forced to, as some are), yet you have a background that gives you a comfort level with vi, the *nix-based text editor, then this review is for you.

The percentage of people who prefer vi is probably small compared to those using Notepad, emacs, pico or another simple editor.  It takes a certain amount of masochism to plow through the various commands used to move around, edit, replace, etc. inside of vi, but for those of you who have that trait as I do, vi gives you a productivity increase that is unparalleled, in my opinion.

Now, if only we had that in Windows!

Fortunately we do.  For several years now the vim project has had a Windows text editor, which I use.  It is a very good implementation within the Windows environment.  For editing text-based files where you do not need any further functionality, I highly recommend it.

Now, on to the fun stuff!

Jon at NGEDIT Software has a few products that have made my life a lot easier.  They fall under the heading of VIEmu, the vi-vim editor emulation for Visual Studio, Word, Outlook and SQL Server.  I can tell you that, after downloading the Visual Studio trial and running with it for a few weeks, I have purchased all three products (Word and Outlook are combined in one package).  They are wonderful!

You should not expect 100% vi-vim compatibility; there are some things that just do not work quite the same.  However, most of the basic and much of the advanced functionality is available.  There are a few quirks as well, such as the need to use Shift+Esc instead of just Esc to get out of some modes, but they are workable once you get used to them.

I should say that this review was not sponsored in any way, nor did Jon or anyone at NGEDIT Software know about my writing this before publication.  I believe in full disclosure of sources and sponsorship when posting (thanks Robert Scoble for the inspiration); this is purely a fan-driven review of these products.

The Word and Outlook version is a bit young, only version 1.0, however the Visual Studio product has been around a while and the SQL product just a bit less time.  So far all have been performing well and my productivity has increased, at least I believe it has.

If you are/were a vi-vim junkie living in a Microsoft world, I urge you to head over and try it out for yourself.  I think you will be happily comfortable again within the embrace of vi-vim!

Viewstate Helper from Binary Fortress Software

Wow, I am going to be accused of becoming a Scott Hanselman sycophant if he keeps up the pace of the great posts he’s had lately!  In his most recent post (as of the time of this writing) he points out a piece of software he recently discovered.  After reading through his review I had to try it out for myself.  It is the ASP.NET Viewstate Helper from Binary Fortress Software.  This is a very nice tool!

It sits in the background monitoring the HTTP conversations that IE has.  It presents a historical list of the pages visited along with some stats about the size of the page and the Viewstate, if it has one.  It also allows you to double-click on a page to see the decoded version of the Viewstate.

It tries to display the viewstate in a tree view, although I have found it doesn’t always work.  It does give you a text representation that will get you what you need although you may have to search through it a bit if the viewstate is complex.
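To get a feel for what a tool like this measures, here is a rough sketch of my own (not how Binary Fortress implements it) that pulls the hidden __VIEWSTATE field out of a page and reports its encoded and decoded sizes.  It assumes the typical `id="__VIEWSTATE" value="..."` attribute ordering that ASP.NET emits, and the 0xFF 0x01 check matches the header that .NET 2.0's ObjectStateFormatter writes.

```python
import base64
import re

def viewstate_stats(html):
    """Locate the hidden __VIEWSTATE field and report its size before and after base64 decoding."""
    m = re.search(r'id="__VIEWSTATE"\s+value="([^"]*)"', html)
    if not m:
        return None
    encoded = m.group(1)
    raw = base64.b64decode(encoded)
    return {
        "encoded_chars": len(encoded),
        "decoded_bytes": len(raw),
        # ObjectStateFormatter streams (.NET 2.0+) begin with the bytes 0xFF 0x01
        "looks_like_net20": raw[:2] == b"\xff\x01",
    }

# Hypothetical page fragment for illustration
sample = '<input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" value="%s" />' % (
    base64.b64encode(b"\xff\x01fake-state").decode("ascii"))
print(viewstate_stats(sample))
```

Running something like this against your own pages gives the same kind of eye-opening size numbers the tool reports, without the tree-view decoding.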

The one downside I have found so far is that it does not work with Firefox, or at least I have not happened upon how to do it.  For the time, I can live with that.  The information that it provided on a few of the sites we’ve created has already been eye-opening.

In Search Of…

A solution!  Since I have yet to find one, I’ll settle for a little rant.

If you are a developer using the Visual Studio 2005 IDE you may have run across the infamous “unable to copy file…..” error when trying to compile your solution.  If you work on anything somewhat complex with many projects in one solution, you may have experienced this a lot.

Formerly I put the blame on Visual Studio itself.  I have been informed that it is not a problem with VS, but rather an issue with the .NET Framework.  I’ve read many posts about the issue, but have yet to find anything that fixes the problem.  It wouldn’t be so bad, except the project I’m currently working on has 18 projects in the solution, and reloading VS every time this happens is a real productivity killer!

Whew, now that I’ve got that off my chest…if anyone finds a solution that works, please leave me a comment about it!

Development Cycle

The company that I work for has a web-based product/service that we sell, a content management system that I feel (yes I am a bit biased) is a very nice, user friendly system that empowers website owners to keep their content fresh easily.

Now that my marketing spiel is complete (and not a good one at that; I am an engineer after all!) I will move on to my point.  How do we keep adding new features, fixing existing bugs and maintaining quality in our product?

I will preface this with a few notes.  First, we generally have a high satisfaction rate.  Our customer service is excellent and the product works well.  It is one of the more user friendly packages that I have seen that does not pigeon-hole our customers into “canned” looks.

Second, we are not perfect.  We do updates roughly every month, and while many of them go smoothly, once in a while we lay an egg.  The severity of the bugs introduced and the number of customers affected determine whether that egg is from a quail or an ostrich.

The goal is to continue providing enhancements and features that our customers will find valuable while reducing both the number and size of the eggs that are laid, that is publishing with fewer and less critical bugs.

The way to make progress toward our goal is by using discipline in the development cycle to manage the risks.  It’s not complicated, but there are caveats that I will discuss in a moment.

As stated before, we publish updates on a fairly aggressive schedule, averaging about once a month.  This means that the first step is to choose a target publishing date.  For example, we will say the next publish is on the 30th, a Thursday.

Now that we know when we are publishing, we begin to work everything backwards.  We know that our product must go through a quality assurance (QA) cycle, where users who are not developers will test and try to break the code, find the bugs and help get them fixed.  As a rough estimate we will want a week of this.  That is our next date, the 23rd.  This is an important date.  By the 23rd the development staff must be done fixing and adding features.  If something is not complete by this time, and this is the important part, it does not make it into this cycle.  This is perhaps the second most difficult part of the process, which I will explain in a moment along with the other difficult point.

Working back from the 23rd, we have another date to calculate.  Before the product goes into QA, the developers need a certain amount of time to test, say 4 days.  This means that by Monday the 20th, we are only testing things that are complete.  Once again, anything not finished by this point, to the stage where the developers are only testing, does not make it into this publish.  This is the most difficult part of the process!
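The back-calculation above is simple enough to sketch.  Assuming a week of QA and 4 days of developer testing (the example numbers from this post, not fixed rules), the two cut-off dates fall out of the publish date like this:

```python
from datetime import date, timedelta

def milestone_dates(publish, qa_days=7, dev_test_days=4):
    """Work backwards from the publish date to the two cut-off dates."""
    code_freeze = publish - timedelta(days=qa_days)               # QA begins; dev work must be done
    feature_cutoff = code_freeze - timedelta(days=dev_test_days)  # only finished items enter dev testing
    return feature_cutoff, code_freeze

# Example: publishing on Thursday the 30th (using August 2007, a month where that holds)
cutoff, freeze = milestone_dates(date(2007, 8, 30))
print(freeze)   # 2007-08-23, the QA cut-off
print(cutoff)   # 2007-08-19, which rounds up to "Monday the 20th"
```

The exact weekday the cut-off lands on matters less than the fact that the dates are fixed in advance and everyone knows them.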

So far we have the 20th, where features and fixes may get dropped, and the 23rd, where features and fixes that were thought to be complete by the 20th but did not make it through the 4-day test/fix/repeat process are once again dropped.  The reason these two points are so difficult has no technical basis, but rather psychological roots.  We (humans) want to make other people happy, and the way to do that in our business is by fixing things and giving them more.  Therefore the mentality becomes, “I can add this one last fix tonight and John Doe will love it!”  This is a good quality to possess, but it is dangerous if not tempered.

There is one more important piece to this puzzle that I have not yet mentioned: defining the items to be worked.  This is a two-fold process.  First, bugs and features must be combined into a prioritized list.  The driving force behind this list needs to be customer service, since they are on the front lines talking to the people using the system every day.  (By the way, customer service should also be heavily involved in the QA portion of the cycle.)  Second, management budgets a certain amount of time/money for the product.  This determines how far down the priority list we think we can go in this cycle.  Note that the cut-offs on the 20th and the 23rd may decrease or increase that number.

The point is sticking to your guns and being honest about the drop-off dates.  We have learned that when we add features and fix bugs in an attempt to make people happy at the last minute, those items do not get the attention necessary in QA to ensure a smooth release.  They will usually result in an egg, and sometimes it’s an ostrich that is laying it.

The difficult part of those dates comes from the fact that we must say “no” to something, which means that we are effectively saying “no” to someone, and that goes against our nature.  It is a necessary “evil”, if you will.  Saying no may disappoint, but saying yes too many times will cause many more people to become frustrated and unhappy.

Someone once told me, we are defined not by what we say yes to, but rather by what we say no to.  I believe that this is true and that it defines our product.  The more we say no, the better and more stable the product becomes.  The features will get added and the bugs fixed, but at a pace that is manageable and does not compromise stability by introducing more bugs than were fixed.

If you have other ideas that work for you, please feel free to comment.  I do not believe that this system is perfect, but for now it seems to be a fairly solid process.