Using an External Config File with log4net, ASP.NET 2.0, and IIS7

I started a new project recently and set about adding log4net to it.  I'd upgraded from XP to a new Windows 7 workstation over the past month, so the step up to IIS7 was exciting and a bit nerve-wracking all at the same time.  My first hurdle so far has been getting log4net working.

I tend to be the type of person who likes to separate the log4net configuration out into its own file, usually log4net.config.  Setting up this new project, however, I started running into a problem I'd never seen before.  When trying to use the XmlConfigurator to read my log4net.config, I would see this exception:

[SecurityException: Request for the permission of type 'System.Security.Permissions.FileIOPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.]
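
For reference, I load the external file from Application_Start with something along these lines (a minimal sketch; the "~/log4net.config" path and the ConfigureAndWatch call are how I typically wire it up, not necessarily your exact setup):

using System;
using System.IO;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Read log4net settings from the external file and watch it for changes.
        log4net.Config.XmlConfigurator.ConfigureAndWatch(
            new FileInfo(Server.MapPath("~/log4net.config")));
    }
}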

Searching for others with this problem did not yield much success.  I've seen articles attributing it to everything from medium trust security (trust on my local workstation was set to Full) to some sort of breaking code within log4net itself.  Most of these articles, I suspect, were not about IIS7, but it was hard to tell.

I ended up digging into the log4net source code to find out what was happening.  The problem occurred on a line of code that tried to access the FullName property of a System.IO.FileInfo object.  That property worked fine if I accessed it from my web project, but once the call got down into the log4net guts, it threw the exception.

After a lot of frustration, I finally started looking elsewhere.  I ran across a comment on Stack Overflow from someone who used a .xml extension instead of .config for the file.  I had dismissed it since others claimed .config files were fine, but recalling what I'd seen there, I decided to try it.  *Poof* it worked!

I didn't really want to leave the configuration in a file with a .xml extension, since that could easily be downloaded from the server.  The discovery told me the problem was likely something IIS does with ASP.NET to hide files, but the odd thing was that log4net could read from the web.config just fine, so I was a little perplexed.  I started digging into how IIS7 handles these types of protected files.

It turns out that there's a section in the hosting configuration that lists protected files, under the name "requestFiltering".  Hmm... that was an interesting-sounding name.  Unfortunately, the entries there were all by file extension, not by file name directly, so there had to be something else.

I ended up in the IIS7 Manager application and found the Request Filtering area.  Poking around in there, I discovered a tab called Hidden Segments, which had an entry for web.config!  I clicked the Add Hidden Segment link under Actions, added a new entry for log4net.config, and voilà, my application worked!

I know that most of the IIS7 configuration settings are stored in various config files, including the web.config, so I started looking in there and found this new element in the <system.webServer> section.

<security>
    <requestFiltering>
        <hiddenSegments>
            <add segment="log4net.config" />
        </hiddenSegments>
    </requestFiltering>
</security>

I believe that adding this section to a site's web.config by hand will work just as well as going through the Manager UI.  I even changed my trust level in my dev environment to Medium and it still ran just fine.
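
If you would rather script the change than click through the Manager, I believe an appcmd call along these lines adds the same entry (a sketch; I'm assuming the standard inetsrv location and the usual appcmd collection syntax):

%windir%\system32\inetsrv\appcmd set config /section:system.webServer/security/requestFiltering /+"hiddenSegments.[segment='log4net.config']"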

There may be other things that can cause this sort of problem, but this fixed mine.  Hopefully it helps someone else out there, as it was a bear to track down!

How I fixed Visual Studio crashing when opening XAML (WPF) files

For the second time, Visual Studio 2008 began crashing when I double-clicked a XAML file.  In this case, the crash was spectacularly underwhelming: no error message, no blue screen, no other indication of something horrible happening.  Visual Studio would simply go away, or "blink out" as I've seen it referred to.

After much searching, I was able to determine that the PowerCommands for Visual Studio 2008 extension was somehow involved in the problem.  I tried quite a few different solutions, including adding a dependentAssembly entry to my Visual Studio configuration and doing a clean solution followed by a rebuild of the project.  Up to that point, the only thing that worked was uninstalling the extension; reinstalling it afterwards would cause the problem to return.
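
For what it's worth, the dependentAssembly attempt looked roughly like this in devenv.exe.config, Visual Studio's own config file (a sketch only; the assembly name, token, and version numbers below are placeholders, not the actual values I tried):

<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <!-- Placeholder identity; substitute the assembly you suspect -->
      <assemblyIdentity name="SomeAssembly" publicKeyToken="0123456789abcdef" culture="neutral" />
      <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0" />
    </dependentAssembly>
  </assemblyBinding>
</runtime>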

Working without PowerCommands was an option I did not want to consider.  Many times each day I use functions such as "Open Command Prompt", which fires up cmd.exe in the directory of the file or folder selected in Solution Explorer, or "Open Containing Folder" on a file, just to mention a couple.  It was looking like uninstalling was going to be the only solution, and I was bummed.

I decided to go to the source and check a bit further.  I looked through the project's issue list, and the top-voted issue was "VS 2008 crashes when adjusting toolbox items".  Reading the description did not give me a lot of hope, as it dealt with a problem triggered by adjusting toolbox items, and it included an actual error message, which was lacking in my case.

Among the comments on the above issue, I found the suggestion to run the command "ngen /delete *" (without the quotes) from a Visual Studio 2008 command prompt.  Ngen is the Native Image Generator: it creates native images from managed assemblies and installs them into the native image cache on the computer, a reserved area of the global assembly cache.

Wow, that sounds important, and I'm supposed to delete * (star)?  Ok... I was desperate, and I did it before investigating what it meant.  Dangerous, I know, I know, but as I said, I was desperate!

It worked.  Nothing seemed to break... and it worked.  Ok, off we go!  Wait, time to investigate... just in case problems start cropping up later.

Running ngen /? from the command prompt yielded no information, not even a mention of the /delete switch.  In my experience, when a command-line utility fails to document an argument that clearly did something, it is an argument that should only be used with full knowledge of what it does.  Now I started to get nervous!

I did a quick Google search and was led to the MSDN documentation for ngen.  It seems my little command deleted all of the native images in the native image cache.  Hmm... that sounds bad.  Fortunately, a little further down the page, the documentation also notes that removing native images deletes only the precompiled images themselves, not the managed assemblies they were generated from; the runtime simply falls back to JIT compilation.

This made me feel a little better.  The assemblies were still there, just without native images for them.  Native images simply let applications and assemblies start up a bit faster.  With today's hardware, I don't know that I'll even notice a difference, and if I do, it will be worth it to keep PowerCommands and be rid of the problem.

If this behavior resurfaces, I will do a little more investigation by using the /show argument to find out which native images are installed, then deleting them one at a time to find the culprit.  My guess is that one of the native images somehow became corrupted, or its configuration got messed up, or something like that.  Perhaps finding the culprit will aid in resolving the issue that, if the Google results are any indication, a fair number of developers are experiencing with this wonderful extension.
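
If you want to avoid the undocumented switches entirely, the documented .NET 2.0 ngen verbs cover the same ground (a sketch; the assembly name is a placeholder):

rem List the native images currently in the cache
ngen display

rem Remove the native image for one specific assembly
ngen uninstall "SomeAssembly"

rem Regenerate it later if needed
ngen install "SomeAssembly"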

Ready to Launch

At Smart Solutions, the company I work for, we’ve been working on a new product for quite some time now.  Our target has been a launch in February at the SMX West 2009 conference.  We’ve just announced it on our company blog.

Warning…Warning…The Coolest New Product of the Year Announced!

But you’ll have to be at SMX to see it!

Well, that’s not entirely true – you can also see it at www.pixelsilk.com; but you’re going to want to be at SMX to really get to know the geniuses behind this leading-edge…some would say “futuristic”… SEO-enabled Content Management System.

You can read all the details here.  It's an exciting time, and it is very fulfilling to see our labor bear fruit.  Congratulations to the entire team!

JavaScript == English

I was born and raised in the USA, and as such the English language comes naturally to me.  Sure, I see the idiosyncrasies: three words spelled three different ways that sound exactly the same (their, they're, there), one spelling with multiple meanings (rich, row, tear), or a word spelled the same but pronounced differently with a different meaning (lead, bow, ...).  Because of these traits, it has been said that English is one of the toughest languages in the world, if not the toughest, to truly master if it's not your native language.  (Note: this is a generalization, and every situation is different.)  The basics are easy, but really grasping the language takes genuine immersion in the culture.

JavaScript feels the same to me.  If your native programming language uses similar concepts (Lisp, Scheme), then you're probably fine, but if you are among the majority coming from a C-based language (C++, C#, Java, ...), you will likely have problems.  Sure, the basics are easy, and it looks like your C-based language, but mastering it can take a lot of time immersed in its culture.

The question becomes this: in today's web environment, is it worth it to truly master the language?  The creator of JavaScript, Brendan Eich, casts his own doubts regarding his creation's future:

I don’t really believe ES4 is a demon from the ancient world, of course. I’m afraid the JS-Hobbits are in trouble, though. As things stand today, Silverlight with C# or something akin (pushed via Windows Update) will raze the Shire-web, in spite of Gandalf-crock’s teachings.

The Shire-web he refers to is, of course, the current status quo, and Gandalf is clearly Doug Crockford, one of the top authorities (or perhaps THE top authority) on JavaScript.

My own experience has just recently brought me to an understanding of JavaScript that makes me feel like I finally, really get it.

I feel like the <pick your Latin-language-based country here> native who learned enough English when I was young to ask how much my dinner cost, where the bathroom is, and how to call a cab to get back to my hotel.  Finally, after moving to the USA and living, working, and playing with native English speakers, I gain mastery of the language.

Having made that analogy, albeit a stretched one, I think I now have enough information to say: it depends.

What?  It depends?  I’ve just read this entire piece of junk for a non-answer!?!?  Well, yes.  Nothing in this life is black and white, or at least very few things are.  This includes JavaScript!

I really do think that JS’s multi-paradigm nature means there is no one-true-subset for all to use (whether they like it or not)

I feel JavaScript will, of course, be around for a long while.  The web does need an overhaul, but there are too many people and pages invested in JavaScript for any grandiose claim that Silverlight, Flash, AIR, or whatever else will replace it.  These new, powerful, and exciting technologies have found ways to work with JavaScript, and it is my belief that the world (wide web) will be a better place for it.

I also believe JavaScript will evolve, kicking and screaming if necessary.  I believe it will become more powerful for the programmer.  Of course, the problem with evolving JS is browser support and browser saturation.  The platforms must support it and people must upgrade.  That is the one advantage the other technologies have, at least for the moment.

If you feel you do not have the level of mastery you should with JavaScript, I suggest immersing yourself in Doug Crockford's writings; they are a great place to start.  Use jQuery as well, since many of the examples you find will push you into patterns you may not have investigated before.  As with all things, practice and perseverance are the key!

Good luck on your journey, Grasshopper!

My Adventures Installing mono 2.0 on CentOS 4 to work with apache via mod_mono

Apparently the good folks over at the Mono project have decided to discontinue binary packages for the Red Hat line of Linux distributions.  It's a shame in a way; there are a lot of those installations out there, so it would be nice to keep things updated through yum, apt-get, up2date, and the like.

On the up side, installing from source has never been easier.  In the past I have gone through many hours of trying to get the right versions of the different libraries that were needed.  With the official release of 2.0, things seem much better.  I thought I would share the steps that I went through.

Disclaimer: This worked on a fairly fresh install of CentOS 4.7.  I have not tried it on 5.x, nor on any other flavor of Linux (SUSE, Ubuntu, etc.), so your mileage may vary.

At the time of the install (and this writing) the current mono stable version is 2.0.1 so all references will be to that version.  Here are the steps that I went through.

Preparation

Always be prepared – the Boy Scouts' motto...

In rooting around the web I did find a few helpful pointers.  First, make sure you have gcc installed.  Now, this is one of those "duh" pieces of information, but for the sake of completeness I thought I would mention it.  (Note: if you do not have gcc or bison, install them!  Credit: The_Assimilator's comment.)

yum install gcc-c++
yum install bison

Next I installed the httpd-devel package.  I had read (I'll track down the link later) that it helps some of the installation down the line.  In my case I just used yum to install it.  (Note: the httpd-devel package is required by the mod_mono compile if apxs, the Apache extension tool, is not already on your machine.  Credit: The_Assimilator's comment.)

yum install httpd-devel

You may also require the glib-2.0 libraries (thanks to Michael Walsh for that bit).  If you receive the error "Package glib-2.0 was not found in the pkg-config search path", you can install them via yum as well.

yum install glib2-devel

The Main Dance

Next comes the meat of the installation.  First, I downloaded the necessary source packages, simply using wget to snag the core Mono package, xsp (the Mono web server), and mod_mono (the Apache integration).

wget http://ftp.novell.com/pub/mono/sources/mono/mono-2.0.1.tar.bz2
wget http://ftp.novell.com/pub/mono/sources/xsp/xsp-2.0.tar.bz2
wget http://ftp.novell.com/pub/mono/sources/mod_mono/mod_mono-2.0.tar.bz2

Next we install the mono core

tar -vxjf mono-2.0.1.tar.bz2
cd mono-2.0.1
./configure
make
make install
cd ..

Next comes xsp

tar -vxjf xsp-2.0.tar.bz2
cd xsp-2.0
./configure
make
make install
cd ..

At this point I received an error (I believe it was during make) saying the compiler could not find the file dotnet.pc.  The file was indeed on my system, so I simply had to export the pkg-config path and then finish the compile.

export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
make
make install
cd ..

Note: Make sure the file dotnet.pc is in that location.  If not, adjust the path above.
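
If you are not sure where dotnet.pc landed, a quick search will turn it up (any standard shell; the redirect just suppresses permission errors):

find / -name dotnet.pc 2>/dev/null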

Finally we install mod_mono

tar -vxjf mod_mono-2.0.tar.bz2
cd mod_mono-2.0
./configure
make
make install

There, easy huh?

Configuration

You may want to verify a few things to make sure the configuration is ready to rock.  In my case, I am keeping the Mono configuration in a separate file for sanity's sake.  You can do that or put it all in your httpd.conf; it's up to you.

<IfModule !mod_mono.c>
    LoadModule mono_module /usr/lib/httpd/modules/mod_mono.so
    AddType application/x-asp-net .aspx
    AddType application/x-asp-net .asmx
    AddType application/x-asp-net .ashx
    AddType application/x-asp-net .asax
    AddType application/x-asp-net .ascx
    AddType application/x-asp-net .soap
    AddType application/x-asp-net .rem
    AddType application/x-asp-net .axd
    AddType application/x-asp-net .cs
    AddType application/x-asp-net .config
    AddType application/x-asp-net .Config
    AddType application/x-asp-net .dll
    DirectoryIndex index.aspx
    DirectoryIndex Default.aspx
    DirectoryIndex default.aspx
</IfModule>
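
If you do keep the configuration in its own file, the stock CentOS httpd.conf already includes everything under conf.d, so dropping it into something like /etc/httpd/conf.d/mod_mono.conf (a name of my choosing, not one the install creates) gets picked up automatically; otherwise pull it in explicitly:

Include conf.d/mod_mono.conf

Then restart Apache so the module loads:

service httpd restart
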
That was it.  I hope that helps!

Visual Studio 2008 RTM

Microsoft shipped Visual Studio 2008 to manufacturing today.  According to some, it has become available for download on the MSDN subscriber site, but only the Team Suite edition.  Since we do not have the "über" subscription (I believe ours is one step below that), we'll have to wait for MS to make the Professional edition available.

Update: I finally was able to start downloading.  Apparently MS decided to use a different distribution method (Akamai?) to get over the initial rush, or at least that is my guess on the matter, and it is purely my own speculation.  The download location was not the normal MSDN subscriber downloads area.  In addition, I had to install a new download manager and, of course, allow popups from msdn2.microsoft.com before I could get it started, but now it's on its own merry way to my computer!  It's coming down at a good clip too, about 570 KB/sec, so if things remain as they are I should have it within a couple of hours.  😀

Good luck to anyone else out there hoping to grab this one 🙂

Interfaces, Abstract Classes and Base Classes, oh my!

The discussion came up today, due to a comment of mine, about Interfaces. When should you use them? When should you use a simple base class? I, of course, gave my own opinion on the topic and my team discussed the current project environment to come to a decision on how to proceed.

Afterwards, I began doing a bit more research on the subject. One of the articles I ran across, by Mahesh Singh, is a very nice overview entitled Abstract Class vs Interface. Essentially it comes down to this question: will there be shared implementation among the inheriting classes? His suggestion is that in most cases it is better to use an abstract class. There are a few good comments as well; one reader poses an exercise which may make things clearer and potentially swing you more towards interfaces, but I'll leave that up to you.
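
To make the shared-implementation question concrete, here is a minimal C# sketch (the type names are invented for illustration): an interface only declares the contract, while an abstract class can hand real code down to its children.

using System;

// The interface declares a contract; every implementer starts from scratch.
public interface IShape
{
    double Area();
}

// The abstract class carries shared implementation down to derived classes.
public abstract class Shape
{
    // Shared by all derived classes; no need to reimplement.
    public string Describe()
    {
        return "A shape with area " + Area();
    }

    // Each derived class must still supply its own Area.
    public abstract double Area();
}

public class Circle : Shape
{
    private readonly double radius;

    public Circle(double radius)
    {
        this.radius = radius;
    }

    public override double Area()
    {
        return Math.PI * radius * radius;
    }
}

If something like Describe exists, the abstract class earns its keep; if the contract is all that's shared, an interface leaves implementers free to sit anywhere in the hierarchy.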

In the article Interface vs Abstract Class, Maor David describes some of the differences between interfaces and abstract classes. This article does not try to answer the question of which is better so much as to inform.

It is my opinion that every situation is different. I do not believe either is inherently better than the other; rather, each is better in certain circumstances. That puts the onus on the developer to decide which suits the conditions at hand.

The point that is perhaps more important is to realize that you won't always get it right. If not, don't be afraid to refactor! At the same time, don't be afraid to look at it and say... meh... it's good enough; the gain from refactoring to use X would not be worth the time involved.