Tech Blech

Wednesday, August 14, 2013

Dropbox and SendSpace: useful and reliable file services

There are so many ways to move files around, and to back files up, that it can make one's head spin.  I work across several different computers in various locations, and I've ended up relying heavily on both the Dropbox and Sendspace file services.  Both are free up to a limited file size and bandwidth, and both are worth paying for if you need to transfer or back up a lot of files.  I've now been a paying customer of both services for a couple of years, and I've found them both reliable and easy to use.  That said, they are not identical.  Dropbox is useful for backups and for sharing regularly-used files across my many different work computers.  Sendspace is useful for transferring very large files between computers, whether mine or other people's.

Dropbox will store your files "in the cloud" (that is, on a server, or servers, somewhere on the internet).  You access the files via a special "Dropbox" folder on your computer, which is created by the Dropbox installer.  Dropbox will automatically copy any files you deposit in its folder up to its servers.  Then you'll be able to reach those same files on any other computer where you've also installed Dropbox using the same account.  To get started on a given computer, you just download and install the Dropbox client, run it, and log on to your Dropbox account in the client.  The client program manages your Dropbox folder.  It is very smart about synchronizing local and server files, and it gives a good visual indication of when the synchronizing is done.

Sendspace will allow you to transfer very large files.  I generally zip (compress) one or more files into a single huge file before uploading it to the Sendspace server.  Sendspace is drop-dead simple to use.  You don't need any coaching from me--just go to the site and follow the instructions.  When a file is too huge to move around by any other means, Sendspace can nearly always move it between any two computers, as long as both have access to the internet.  Sendspace's free service offers the sender a link to delete the file from its servers, but if the sender doesn't bother, the file is deleted automatically after two weeks.

Paying for Sendspace will increase the maximum allowed file size from huge to humongous.  It will also allow you to keep files up on their servers indefinitely (as long as your account is paid up).

To send a file to someone else, you do have to give Sendspace the recipient's email address; Sendspace then emails that person a link they can use to download the file.  As far as I can tell, Sendspace does not mine these emails and never spams any user or recipient of files.  A service that remains spam-free is unusual these days, which makes Sendspace one of my favorite internet companies right now, and one of the few I can recommend whole-heartedly.

Good services deserve good publicity; hence this blog entry.  Give them a try!  It won't cost you anything to try them out.  However, I don't recommend that you send any sensitive data using these (or any other "cloud-based") services.  There is never any guarantee that a third party company won't look inside your data.  Thankfully, though, the kinds of files I'm sending around are not likely to be desirable to anyone but me or my immediate co-workers, and they don't contain anyone's private information.

Friday, July 05, 2013

GoDaddy hassle

[Screenshot: the page with SSI's broken]
[Screenshot: the healthy page]
Yesterday, I received a trouble report about a Linux shared-host website that resides on GoDaddy servers.  To my horror, the site now looked like the broken page shown in the first screenshot, instead of the healthy page shown in the second.  It was pretty clear that server-side includes (SSI's) had stopped working, and since I had not updated the site in a couple of weeks and all had been working well up until yesterday, it was also clearly because of something GoDaddy had done to the server.  If nothing else, their server logbook ought to show what had been done, so a developer like myself should be able to figure out how to adapt.  Bad enough that no advance notice had been given.

I immediately put in a call to GoDaddy technical support to find out what had happened and get some help figuring out how to get the site back to its former state of health.  After navigating numerous computerized phone menus and waiting on hold for about 15 minutes, I finally reached a human being, who immediately put me on hold and then disconnected the call.  This person did not call me back, so after a few more minutes, I put in a second call to GoDaddy support.  Same drill: after about 15 minutes, I got a person, who didn't know anything and put me on hold while he "contacted the server group".  After another 15 minutes, he returned to announce that I would have to fix the problem myself, as it was a scripting problem.  OK, I enquired, how shall I fix the problem?  My code hasn't changed.  And in the meantime, I had verified by extensive web searching that GoDaddy's forums had no help page showing how server-side includes ought to work.  Further, there were many entries in GoDaddy's forums within the past two weeks by other customers whose server-side includes had also stopped working.  "Sorry", the tech support guy said, "it's a scripting problem and we don't touch your code.  You'll have to fix it."

I was now wavering between disbelief and rage.  After spending another hour trying every wild suggestion and every trick I'd ever encountered for getting server-side includes working on Linux, I "patched" the problem for the short term by eliminating the SSI's altogether in the important pages of the website, so that my site once again had graphics and styling.

Returning the next day, fresh and rested, I was able to get server-side includes working again by making this small change:

    Bad directive:     <!--#include virtual="./shared/insert.txt" -->

    Good directive:    <!--#include file="./shared/insert.txt" -->

Really?  One word change fixed this?  And GoDaddy tech support is too stupid to tell this to customers?  Needless to say, I don't trust GoDaddy.  I fully expect now that the pages will stop working at any moment due to some unknown change in their server configuration.
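For reference, here is how SSI processing is typically switched on for an Apache shared host (a sketch of a generic .htaccess, not GoDaddy's actual configuration).  The difference between the two directives above: `virtual` resolves the path as a URL through the server, while `file` resolves it relative to the current document's directory--which is presumably why swapping the attribute dodged whatever changed server-side.

```
# Typical .htaccess lines to enable server-side includes (hypothetical example)
Options +Includes
AddType text/html .shtml
AddOutputFilter INCLUDES .shtml
```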

And, I will never, ever again use GoDaddy for any service.  What arrogance these large corporations are capable of developing towards their own customer base.  I'm still aghast.  They could have kept my business so easily.  Stay away from GoDaddy.  The word "evil" comes to mind.

I will be moving all business from GoDaddy as soon as can be arranged.

Tuesday, November 27, 2012

Microsoft update kills web application via namespace collision

In my work at the Academy of Natural Sciences of Drexel University, I administer a database server and a web server for the Phycology Section; the web server runs a number of REST web services on .NET 3.5.  Today, after installing a number of Windows updates on the web server, one of the web services (here's an example) was broken.  The updates I had installed are shown here:
[Screenshot: list of installed Windows updates]
After investigating, I noticed a long, messy compile error that had not existed before, on a page in the cache which I had not created.  I stared at the error and the page until I began to understand that perhaps there was a name collision on the word "Site".  I had long used a master page called Site.master, so I changed it to Site1.master and repeatedly recompiled until all pages dependent on this master page had been changed to use the new name--after which the problem disappeared.

So answer me this.  How come an update to something in .NET 4 breaks a web service running on .NET 3.5?   And furthermore, how could anyone at Microsoft be dumb enough to suddenly purloin a common name such as "Site"?  Probably many programmers around the world are cursing and going through the same discovery process as I write this.  Bad Microsoft!  Bad!  Bad!

Monday, September 10, 2012

Undeletable Files on NTFS File System

Maintaining a web server on Microsoft Windows Server 2008 has mostly been straightforward and not very time-consuming.  But recently I was confronted with a small but extremely annoying hassle: three files within the web server's area could not be read, could not be deleted, and could not even be included in a .zip file.  It was this last issue that first got my attention; I had been accustomed to zipping up the entire wwwroot area from time to time and shipping it off-disk as a backup.  This began to fail.

I knew right away that I had introduced the problem while attempting to administer permissions on Pmwiki, a third-party application written in PHP that was never intended to run on a Microsoft web server.  Permissions for the upload folder kept reverting so that uploading attachments to the wiki failed, and it was while wrestling with this periodically recurring conundrum that I shot myself in the foot by somehow removing my ownership of those three files.  To get around this boondoggle, I had made an entire new upload folder (from a backup), and just renamed the old "stuck" upload folder.

Then, the next time I tried my handy zip-it-all-up backup trick, I got an error.  The error, of course, happened right at the end of a long zip process, and the entire archive failed to be created.  I now had no off-disk backup happening, all due to these three "stuck" files, which I could see in the folder but could neither read, delete, nor change permissions on.  How anyone could deal with such problems without Google, I cannot imagine.  Thank the universe for search engines.

Within minutes of beginning my search, I found this support page on Microsoft's site.  At the very bottom, I found what I needed, which was the "if all else fails" strategy, which Microsoft called "Combinations of Causes" (that gave me a chuckle).  This strategy required me to use a utility called "subinacl.exe" which the support page cited as being part of the Resource Kit.

Googling some more, I soon found that the Resource Kit was a very expensive (as in $300) book with tools on a disk in the back.  I wasn't going to buy it.  Then it occurred to me just to search for "subinacl.exe", and thankfully, I found that Microsoft had made it available as a download.  So I downloaded it.  But idiot that I am, I failed to note where it had installed itself (somewhere obscure, I promise you).  I had to uninstall it and then reinstall, this time noting down the install location, which, for those of you going through the same thing, was C:\Program Files\Windows Resource Kits\Tools\.

So then I took a deep breath and constructed the command line that I needed in Notepad.  I opened a command window, browsed to the obscure install folder shown above, carefully pasted my unwieldy command into the window, and (holding my breath) hit return.  A frightening amount of technobabble appeared as the command executed.  After a few tries, I got it to succeed, though it still gave warnings.  I had to do this four times: once for each file, and then for the containing folder.  The model command line is:

subinacl /onlyfile "\\?\c:\path_to_problem_file" /setowner=domain\administrator /grant=domain\administrator=F

[Screenshot: command window output]

Altogether, the failures of Pmwiki and Microsoft file permissions have cost me hours of hassle over the five years I have managed the server.  This latest offense was just the crowning jewel.  Managing file uploads on IIS is a challenge at any time, as the web server really is leery of anyone writing into a file zone that it serves.  But this doggy open-source PHP software (Pmwiki), tested only on Linux, is, in retrospect, hardly worth the effort.  I haven't tried installing other wiki software yet (no time for the learning curve!), but I hope one of them runs better than this on a Windows server.

Microsoft's support page did do the trick.  The problem shouldn't be possible anyway.  The big Administrator account should always be able to delete a file--what were they thinking?--but at least they provided the necessary utility to dig myself out of the pit. Still, I'm not exactly feeling kindly towards Microsoft at this moment.  Or towards Pmwiki.

Thursday, June 28, 2012

ERROR - The 'Microsoft.Jet.OLEDB.4.0' provider is not registered on the local machine.

I support a .NET application which reads a version table in its background database, which is Microsoft Access.  If the version is wrong, the application refuses to run.  Recently, one of the application users upgraded to a 64-bit machine, and the application began reporting that the backend database was "the wrong version", even though it was in fact the same database that had worked fine on a 32-bit machine.  After some debugging, I unearthed the following error message (which was being swallowed): ERROR - The 'Microsoft.Jet.OLEDB.4.0' provider is not registered on the local machine.

Thanks to Google and other bloggers, I learned that Visual Studio 2010 Professional compiles, by default, with an option called "Any CPU".  But after recompiling the application with the "x86" option (which is an alternative to "Any CPU"), the application began to work on either a 32-bit or 64-bit CPU.  Huh? 

Apparently, with the "Any CPU" option, .NET ran the application as 64-bit (because it happened to reside on a 64-bit machine at the time) and a mismatch occurred when it tried to read from the 32-bit Access database.   By forcing the compiler to be "x86", I forced the program to run as a 32-bit process even though it resides on a 64-bit machine, and no mismatch occurred.  
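The IDE checkbox maps to an MSBuild property; assuming a standard Visual Studio 2010 project file (this is a sketch, not the actual project file from the post), forcing 32-bit amounts to something like:

```
<!-- Forces the assembly to run as a 32-bit process even on 64-bit Windows -->
<PropertyGroup>
  <PlatformTarget>x86</PlatformTarget>
</PropertyGroup>
```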

I did notice that the application starts a bit more slowly than it did on a 32-bit machine, but once running, it warms up and runs just fine.

Those compiler options are very poorly named.  And now when Office upgrades, someday, to 64-bits, I'll probably have to recompile again.  Thanks, Microsoft.

Saturday, December 03, 2011

How to write to the disk (make a writable folder) in ASP.NET

To do file uploads, some administration is always necessary on the server side. ASP.NET tries very hard to prevent users from writing anywhere on the server, so we have to take special steps if file upload is required. In particular, the IIS7 server will not let you both Execute and Write into the same folder. For the record, here are the steps for IIS7 on Windows Server 2008 (assuming you are system administrator on the server):

* the user creates an uploads folder and sends its file spec to the administrator
* the administrator opens the Internet Information Services (IIS) Manager, browses to that folder, and right clicks over it to change file permissions as follows for the NETWORK SERVICE account:
** remove Read and Execute permission
** add Write permission

Both permissions should be changed in one step, and that should be all that is necessary. But test the write and subsequent read carefully; if either does not work, delete the folder, create a new one, and start all over again.
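If you prefer the command line to the IIS Manager GUI, the grant half of the change can be sketched with `icacls` (the path is illustrative; adjust to your site).  Note that `M` (Modify) grants more than bare Write, and trimming the Read/Execute permission still needs the GUI steps above:

```
:: Grant NETWORK SERVICE modify rights on the uploads folder,
:: inherited by new files (OI) and subfolders (CI)
icacls "C:\inetpub\wwwroot\uploads" /grant "NETWORK SERVICE:(OI)(CI)M"
```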

If you are using a hosting service, there should be some special procedure to get the permissions changed. In my experience, changing the permissions for this purpose has spotty success and can lead to a drawn-out hassle. I suppose it's worth it to have a reasonably secure server.


Tuesday, November 01, 2011

VS.NET 2010 fails to compile program created earlier in VS.NET 2008

Trying to recompile a program in Visual Studio 2010, which was originally created using Visual Studio 2008 (which is still on my PC), I got this baffling message:

Error 9 Task failed because "sgen.exe" was not found, or the correct Microsoft Windows SDK is not installed. The task is looking for "sgen.exe" in the "bin" subdirectory beneath the location specified in the InstallationFolder value of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SDKs\Windows\v6.0A. You may be able to solve the problem by doing one of the following: 1) Install the Microsoft Windows SDK for Windows Server 2008 and .NET Framework 3.5. 2) Install Visual Studio 2008. 3) Manually set the above registry key to the correct location. 4) Pass the correct location into the "ToolPath" parameter of the task.

These suggestions were too bizarre even to consider, and so I Googled. Right away, I found this nice blog entry which helped me out, and just in case it were to go away, I'm duplicating the helpful information it contains below. So, courtesy of the "dukelupus" blog, everything after this paragraph is copied verbatim from that blog.

Changing the registry key will not help nor will adding C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\ to the path. I did not try other solutions...I used FileMon to check what Visual Studio is looking for – and it appears that it will always look for that file at C:\WINDOWS\Microsoft.NET\Framework\v3.5\, which does not contain sgen.exe.

Just copy sgen.exe from C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\ to C:\WINDOWS\Microsoft.NET\Framework\v3.5\ and everything will now compile just fine. Here, to make your life easier, copy command:

copy /y "C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\sgen.exe" "C:\WINDOWS\Microsoft.NET\Framework\v3.5\"

Good luck!


Wednesday, August 24, 2011

SQL Server Management Studio and the "Saving changes is not permitted" error.

Sometimes after a new install of Microsoft SQL Server Management Studio, you may get a default setting that prevents you from changing the design of tables in the visual designer, which can be extremely frustrating, as it is not easy to figure out how to turn off the check that is preventing table redesign. The error will be announced in an annoying popup whose text is uncopyable:

"Saving changes is not permitted. The changes you have made require the following tables to be dropped and re-created. You have either made changes to a table that can't be re-created or enabled the option Prevent saving changes that require the table to be re-created."

Here's how to solve this in Microsoft SQL Server Management Studio 2008:

1) Go into the Tools...Options... menu

2) In the popup, on the left, expand "Designers" by clicking on the plus

3) In the "Table Options" shown to the right, MAKE SURE that "Prevent saving changes that require table re-creation" is NOT checked.

I've lost a few hours of my life to this nuisance. Hope this will help someone else out of the conundrum.


Friday, June 10, 2011

ODBC workaround (Access to SQL Server) for 64-bit Windows 7

I've been avoiding 64-bit Windows due to various incompatibility rumors, but this case takes the cake, as it is entirely Microsoft's fault. My work place uses a variety of shared Access databases, located on a network drive, that connect via ODBC System DSN's to a SQL Server 2008.

Even though all DSN's appeared to be configured correctly on my colleague's brand new (64-bit) Windows 7 machine, and the ODBC connections passed their test, the actual database declined to connect to the server. Thanks to various discussion groups, we finally figured out that the graphical user interface accessible from the Administrative Tools applet in Control Panel actually brings up a 64-bit ODBC application, whereas we (for backwards compatibility) needed the strangely hidden 32-bit System DSN window. To run it, we had to browse to this path:


Clicking on odbcad32.exe runs the 32-bit version of the ODBC connection setter upper. There, we re-created all the System DSN's, and finally the Access databases were happy.


By default, the Windows GUI presents the 64-bit ODBC connection setter upper (which I believe lives somewhere under C:\Windows\system32). Going manually to the folder above and running the application, then re-adding the ODBC connections, makes it work.
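For reference: on 64-bit Windows, System32 (counterintuitively) holds the 64-bit binaries while SysWOW64 holds the 32-bit ones, so the 32-bit ODBC Administrator is normally launched as:

```
C:\Windows\SysWOW64\odbcad32.exe
```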

In the meantime, 3 days of work were lost to this problem.


Thursday, May 20, 2010

The challenge of printing a .NET textbox

On Yahoo Answers, someone asked how to print the content of a RichTextBox control in .NET. I did finally manage to print a (simpler) TextBox control in .NET, and that was difficult enough. I am documenting that here for myself or anyone else facing this challenge.

First, a little rant. Before the .NET framework was released (~2000), Microsoft provided a handy .print method on the RichTextBox and TextBox controls, and all a programmer needed to do was call it. But in Microsoft's almighty wisdom (and trying to be just like Java), the simple and highly useful .print method was removed in .NET, and now you have to do all the following steps successfully. And note, it's just as difficult in Java--what were the language developers thinking? I imagine they were thinking to provide maximum flexibility, but why not provide a quick-and-dirty out for the rest of us?

In the example below, my form (called JobForm) is printing the contents of a TextBox control (called textBoxRight).


You need a bunch of special fields in your form code to keep track of printing information. To get started, just use this:

#region printing declarations

        // NOTE: the Font constructors were truncated in the original post;
        // the size and style arguments below are reconstructed guesses.
        private Font midlistRegular = new Font(
            "Sans Serif", 10.0f, FontStyle.Regular);

        private Font midlistRegular1 = new Font(
            "Sans Serif", 10.0f, FontStyle.Bold);

        private int printMargin = 1;

        private int lastPosition = 0;

        private int lastIndex = 0;

        /// Set in Paint (of form) for use when printing
        private float screenResolutionX = 0;

        #endregion printing declarations


You need a special object that raises an event each time one page has been printed and causes the next page to be printed. It is a non-visible control and is called "PrintDocument" (in library System.Drawing.Printing).

In the Windows Designer, drag a non-visible "PrintDocument" control onto your form (System.Drawing.Printing.PrintDocument).  It will be instantiated on your form as "printDocument1".  Double-click the PrintDocument control on the form to create the "PrintPage" event and give it the following code (using your TextBox name instead of "textBoxRight"):

private void printDocument1_PrintPage(object sender, PrintPageEventArgs e)
{
    try
    {
        // arguments reconstructed to match PrintOnePage's signature below
        e.HasMorePages = this.PrintOnePage(
            e.Graphics, this.textBoxRight, this.printDocument1, this.screenResolutionX);
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}


Now create the PrintOnePage() method needed by the above code.  Use the boilerplate code below unchanged (and I apologize that it's so ugly):

        /// Goes through each line of the text box and prints it
        private bool PrintOnePage(Graphics g, 
                 TextBox txtSurface, PrintDocument printer, 
                 float screenResolution)
        {
            // Font textFont = txtSurface.Font;
            Font textFont = 
                   new Font("Sans Serif", 
                            10.0f, FontStyle.Regular, 
                            GraphicsUnit.Point);

            // go line by line and draw each string
            int startIndex = this.lastIndex;
            int index = txtSurface.Text.IndexOf("\n", startIndex);

            int nextPosition = this.lastPosition;
            // just use the default string format
            StringFormat sf = new StringFormat();

            // sf.FormatFlags = StringFormatFlags.NoClip | (~StringFormatFlags.NoWrap );
            // get the page height in pixels
            int lastPagePosition = (int)(((printer.DefaultPageSettings.PaperSize.Height / 100.0f) - 1.0f) * screenResolution);
            // int resolution = printer.DefaultPageSettings.PrinterResolution.X;

            // use the screen resolution for measuring the page
            int resolution = (int)screenResolution;

            // calculate the maximum width in pixels from the default paper size and the margin
            int maxwidth =
                (int)((printer.DefaultPageSettings.PaperSize.Width / 100.0f - this.printMargin * 2) * resolution);

            // get the margin in pixels
            int printMarginInPixels = resolution * this.printMargin + 6;
            Rectangle rtLayout = new Rectangle(0, 0, 0, 0);
            int lineheight = 0;

            while (index != -1)
            {
                string nextLine = txtSurface.Text.Substring(startIndex, index - startIndex);
                lineheight = (int)(g.MeasureString(nextLine, textFont, maxwidth, sf).Height);
                rtLayout = new Rectangle(printMarginInPixels, nextPosition, maxwidth, lineheight);
                g.DrawString(nextLine, textFont, Brushes.Black, rtLayout, sf);

                nextPosition += lineheight + 3;
                startIndex = index + 1;
                index = txtSurface.Text.IndexOf("\n", startIndex);
                if (nextPosition > lastPagePosition)
                {
                    this.lastPosition = (int)screenResolution;
                    this.lastIndex = index;
                    return true; // reached end of page
                }
            }

            // draw the last line
            string lastLine = txtSurface.Text.Substring(startIndex);
            lineheight = (int)(g.MeasureString(lastLine, textFont, maxwidth, sf).Height);
            rtLayout = new Rectangle(printMarginInPixels, nextPosition, maxwidth, lineheight);
            g.DrawString(lastLine, textFont, Brushes.Black, rtLayout, sf);

            this.lastPosition = (int)screenResolution;
            this.lastIndex = 0;
            return false;
        }


In Windows Designer, open your form in graphical view mode. Open the form's Properties Window and click the lightning bolt to see events. Double-click on the form's Paint event to create it, and paste the boiler-plate code from my Paint event below into your form's paint event (your event will have a different name, using your form's name, than mine does below):

private void JobForm_Paint(object sender,
                           System.Windows.Forms.PaintEventArgs e)
{
    // save the screen resolution for use when printing
    this.screenResolutionX = e.Graphics.DpiX;

    // set the starting print position of the text box
    this.lastPosition = (int)this.screenResolutionX;
}


To actually print the contents of the TextBox, you'll need code like this in a print menu or button event:

PrintDialog printDialog1 = new PrintDialog();
if (printDialog1.ShowDialog() == DialogResult.OK)
{
    this.printDocument1.PrinterSettings = printDialog1.PrinterSettings;
    // the Print() call was truncated in the original snippet; it is what
    // actually starts the PrintPage events firing
    this.printDocument1.Print();
}

I haven't paid much attention to the above code since I got it working.  I snarfed much of it from various sources on the web (thank you, bloggers!).  It could be enhanced or cleaned up a lot.  Maybe this will help another programmer get it done.


Thursday, March 04, 2010

Wrestling to sort the ASP.NET GridView

It's supposed to be easy, practically codeless to use, and it is--sometimes: when a GridView is populated with the same dataset every time the page loads. But I had a drop-down list where a condition had to be selected, and based on that, the grid then had to be populated. I made it to the point where I had the dropdown selecting and the grid populating, but there were two problems: paging didn't work, and sorting didn't work. I decided to turn paging off, so sorting was my last remaining issue.

Umpteen useless web articles later, I resorted to the paper books stashed on my shelf at home. First stop was Murach's ASP.NET 2.0, which is alleged to be so good. But it held no love for me. Second stop was Dino Esposito's "Programming Microsoft ASP.NET 2.0: Core Reference"--and finally, I got the help I needed.

I'm blogging about this because a good book deserves real credit. Many mysteries were unraveled by the Esposito book, including that I needed to explicitly re-populate the GridView when its "Sorting" event fired. Esposito's directions were extremely explicit: use the "Sorting" event's GridViewSortEventArgs ("e" parameter) to find out the sort key, and write a special stored procedure that uses the sort key to ORDER the data differently. These last bits of information were the treasure that finally allowed me to get sorting to work.
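As a sketch of what that looks like in page code (the names here are illustrative, not from my actual page): handle the GridView's Sorting event, pull the sort key out of GridViewSortEventArgs, and re-bind with freshly sorted data.

```
// Hypothetical Sorting handler; GetSortedData stands in for whatever
// data-access call invokes the stored procedure with @order_by set.
protected void GridView1_Sorting(object sender, GridViewSortEventArgs e)
{
    // e.SortExpression carries the column name the user clicked
    GridView1.DataSource = GetSortedData(e.SortExpression);
    GridView1.DataBind();
}
```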

I'm posting a copy of the rather odd-looking stored procedure that I ended up using below for your edification. The "@order_by" parameter names the column on which to sort, and the odd way of constructing the query from strings allows brackets to fit around any strange or keyword column names:

CREATE PROCEDURE [dbo].[lsp_NRSA_find_counts_and_person_by_naded] 
    @naded_id int,
    @order_by nvarchar(12)
AS
BEGIN
    IF @order_by = ''
        SET @order_by = 'slide'

    -- build the query as a string so brackets can wrap any odd or keyword
    -- column name (the int parameter must be cast to text for concatenation)
    DECLARE @sql nvarchar(500)
    SET @sql = 'SELECT * ' +
               'FROM vw_NRSA_cemaats_by_count_and_person ' +
               'WHERE naded = ' + CAST(@naded_id AS nvarchar(12)) +
               ' ORDER BY [' + @order_by + ']'
    EXEC (@sql)
END
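A quick way to sanity-check the procedure from a query window (the id value here is illustrative):

```
EXEC [dbo].[lsp_NRSA_find_counts_and_person_by_naded]
    @naded_id = 1234,      -- hypothetical id
    @order_by = 'slide'    -- sort column; '' falls back to 'slide'
```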



Tuesday, November 10, 2009

Sql Server Management Studio 2008 logon persistence problem

I very often open SQL Server Management Studio 2008 for the same server. Problem is, the very first time I accessed this server, I did so using a different user name and password. Forever after, SQL Management Studio refused to remove this old server entry from the dropdown list in the "Connect to Server" popup window.

This made it necessary to click to change the user and enter a new password about 80 times per day--all because this premium, allegedly sophisticated software has decided never to forget a former entry, even though I have deleted the Registered Server, recreated it, and so forth. I'm frankly angry about this nuisance, which wouldn't matter if I didn't have to open and close the tool so many times per day.

If anyone has any concrete suggestion on how I can defeat this thing, please let me know. I see from the internet that I am not the only one having this frustration, but I have yet to find any blog entry or suggestion to alleviate the problem.

UPDATED with an answer in Aug. 2010:

For SQL Server Management Studio 2008 on Windows XP, to reset all your saved logins, delete the file:

C:\Documents and Settings\%username%\AppData\Roaming\Microsoft\Microsoft SQL Server\100\Tools\Shell\SqlStudio.bin

or possibly:

C:\Documents and Settings\[user]\Application Data\Microsoft\Microsoft SQL Server\100\Tools\Shell


For SQL Server Management Studio 2008 on Windows 7, to reset all your saved logins, delete the file:

C:\Users\%username%\AppData\Roaming\Microsoft\Microsoft SQL Server\100\Tools\Shell\SqlStudio.bin


Monday, September 28, 2009

Delete a maintenance plan after changing SQL Server 2008 name

I've seen a few posts on how to delete an older maintenance plan after you've changed the computer name for a default installation of SQL Server 2008, but none of the posts fully solved the problem. Below is what I had to do--with the caveat that you have to find your own id's:

USE msdb

DELETE 
FROM dbo.sysmaintplan_log 
WHERE subplan_id = '36F8247F-6A1E-427A-AB7D-2F6D972E32C1'

DELETE 
FROM dbo.sysmaintplan_subplans 
WHERE subplan_id = '36F8247F-6A1E-427A-AB7D-2F6D972E32C1'

DELETE 
FROM dbo.sysjobs 
WHERE job_id = '3757937A-02DB-47A6-90DA-A64AE84D6E98'

DELETE 
FROM dbo.sysmaintplan_plans 
WHERE id = 'C7C6EFAA-DA4D-4097-9F9F-FC3A7C0AF2DB'


Sunday, May 03, 2009

C# code to get schema of an Access table

I just wanted to dump the schema of a table in Microsoft Access. There is a lot of code on the web that purports to do this, but most of it didn't actually work, and most of it was not in C# (my current preferred language). So I am posting this here for anyone who needs it. Just change the path to the database file in the first line of code, and the name of the table ("taxa" in my example) in the second line. This program dumps the table schema to a file created by the LogFile object (also attached below) located in the application folder. You'll need to add "using System.Data.OleDb;" at the top of the file. Or, just download the code here. The code:

OleDbConnection conn =
    new OleDbConnection(
        "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" +
        "C:\\phycoaide\\phycoaide.mdb;Persist Security Info=False;");
conn.Open();

// retrieving schema for a single table
OleDbCommand cmd = new OleDbCommand("taxa", conn);
cmd.CommandType = CommandType.TableDirect;
OleDbDataReader reader = cmd.ExecuteReader(CommandBehavior.SchemaOnly);
DataTable schemaTable = reader.GetSchemaTable();

LogFile.WriteLine(" ");
foreach (DataRow r in schemaTable.Rows)
{
    LogFile.WriteLine(" ");
    foreach (DataColumn c in schemaTable.Columns)
        LogFile.WriteLine(c.ColumnName + ": " + r[c.ColumnName]);
}
reader.Close();
conn.Close();

The LogFile class creates its file in the folder from which your program runs:

// No copyright; free for reuse by anyone
// Pat G. Palmer
// ppalmer AT
// 2009-05-04
// Opens a single-threaded log file to trace execution
namespace Ansp
{
    using System;
    using System.IO;            // file readers and writers
    using System.Windows.Forms; // Application object

    /// <summary>
    /// Singleton that appends log entries to a text file
    /// in the application folder.  If the file grows
    /// to be too large, it deletes itself and starts over.
    /// The file is kept open until the application ends
    /// and implements the "dispose" pattern in case things
    /// do not end gracefully.
    /// </summary>
    public class LogFile : IDisposable
    {
        private static int maxsize = 470000;
        private static string fileSuffix = "_log.txt";
        private static string fileSpecification;
        private static StreamWriter filewriter;
        private static LogFile instance;

        private LogFile()
        {
        }

        public static void InitLogFile()
        {
            if (instance == null)
            {
                instance = new LogFile();

                string stringMe = "InitLogFile: ";
                if (Application.ProductName.Length == 0)
                {
                    fileSpecification = Application.StartupPath + "\\" +
                       "Test" + fileSuffix;
                }
                else
                {
                    fileSpecification = Application.StartupPath + "\\" +
                       Application.ProductName + fileSuffix;
                }

                // restart file if too big
                if (File.Exists(fileSpecification))
                {
                    FileInfo myFileInfo = new FileInfo(fileSpecification);
                    if (myFileInfo.Length > maxsize)
                    {
                        File.Delete(fileSpecification);
                    }
                    myFileInfo = null;
                }

                // restart file with appending
                filewriter = new StreamWriter(
                   fileSpecification, true, System.Text.Encoding.UTF8);

                // start log with standard info
                string tempString = stringMe +
                    Application.ProductName + " " +
                    Application.ProductVersion + " " +
                    "log opened at " + DateTime.Now.ToString();
                WriteLine(tempString);
                WriteLine(stringMe + "username=" + SystemInformation.UserName);
                WriteLine(stringMe + Application.StartupPath);
            }
        }

        public static void WriteLine(string myInputLine)
        {
            if (instance == null)
                InitLogFile(); // first time only
            if (myInputLine.Length != 0)
            {
                filewriter.WriteLine(myInputLine);
                filewriter.Flush(); // update file
            }
        }

        public static void Close()
        {
            if (instance != null)
                instance.Dispose();
        }

        /// <summary>
        /// Implement IDisposable.
        /// Do not make this method virtual.
        /// A derived class must not override this method.
        /// </summary>
        public void Dispose()
        {
            Dispose(true);
            //// Now, we call GC.SuppressFinalize to take this object
            //// off the finalization queue and prevent finalization
            //// code for this object from executing a second time.
            GC.SuppressFinalize(this);
        }

        private void Dispose(bool disposing)
        {
            if (disposing)
            {
                // no managed resources to clean up
            }
            if (instance != null)
            {
                if (filewriter != null)
                {
                    filewriter.Close();
                    filewriter = null;
                } // end if filewriter not null
                instance = null;
            } // end if instance not null
        }
    } // end class LogFile
} // end namespace
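A minimal usage sketch (assuming a WinForms project, since the class reads Application.ProductName and Application.StartupPath to name and place the file; the Demo class and its log messages are just placeholders):

```csharp
using Ansp;

static class Demo
{
    static void Main()
    {
        LogFile.WriteLine("program starting"); // first call creates/opens the log
        LogFile.WriteLine("schema dump done");
        LogFile.Close();                       // flush and release the file handle
    }
}
```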


Thursday, December 27, 2007

Suppressing VS.NET Compiler Warnings (2005, 2008)

"1591" was the magic string that I needed, and this is the sorry tale of how to find that out.

As a rule, I don't like suppressing compiler warnings, but there is a time for everything. My time came when I inherited a huge mass of ill-behaving C# code and began adding XML comments. I was immediately overwhelmed by hundreds of warnings that said "Missing XML comment for publicly visible type". I wanted to compile the comments that I had added without being nagged by the compiler for not having added the remaining 300 possible XML comments as well.

I knew that Visual Studio 2005 would let me suppress specific warnings in the project build properties. However, I didn't know the warning number that I needed to supply. Microsoft, in its great goodness, has suppressed the showing of warning numbers--only the text is displayed. A few googles later, I knew that it was either 1591 or CS1591, but no one anywhere explained, in general, how to find the full list of warning numbers. I've wanted this list many a time in the past, so I set out to find out, once and for all.

Eventually, I found that I needed to start at the top-level C# Reference page in MSDN2 (for the appropriate version of VS.NET), then search on "compiler warning " + "warning text". So searching on "compiler warning missing XML comment" got me the precious warning number that I needed, which is CS1591. But then I had to psychically understand, of course, that the CS must be left off, and only the 1591 entered.

See my glorious build screen which finally suppressed the evil hundreds of unwanted warnings:


UPDATE in Oct 2008: Now that I am using Visual Studio 2008, I have learned that I can right-click a warning in the Error List pane to pop up documentation that includes the warning's error level and number; from that, I can derive the 4 digits to place in the suppress box of the project Build properties, with no need to search the Microsoft website. I don't know whether this feature was also present in Visual Studio 2005 and I just never noticed it.
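If you only want to silence the warning around a particular stretch of code rather than the whole project, the C# compiler (since the 2005 version) also supports the #pragma warning directive; here 1591 is the same missing-XML-comment warning discussed above, and the class and method names are just placeholders:

```csharp
#pragma warning disable 1591 // missing XML comment for publicly visible type
public class LegacyWidget
{
    public void Frob() { }   // no XML comment, but no warning either
}
#pragma warning restore 1591
```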
