Tech Blech

Wednesday, August 14, 2013

Dropbox and SendSpace: useful and reliable file services

There are so many ways to move files around, and to back files up, that it can make one's head spin.  I work across several different computers in various locations, and I've ended up relying heavily on both the Dropbox and Sendspace file services.  Both are free up to a limited file size and bandwidth, and both are worth paying for if you need to transfer or back up a lot of files.  I've now been a paying customer of both services for a couple of years, and I've found them both reliable and easy to use.  That said, they are not identical.  Dropbox is useful for backups and for sharing regularly-used files across my many different work computers.  Sendspace is useful for transferring very large files among computers, whether mine or other people's.

Dropbox stores your files "in the cloud" (that is, on a server, or servers, somewhere on the internet).  You access the files via a special "Dropbox" folder on your computer, which is created by the Dropbox installer.  Dropbox automatically copies any files you deposit in its folder up to its servers.  Then you can reach those same files on any other computer where you've installed Dropbox using the same account.  To get started on a given computer, just download and install the Dropbox client, run it, and log on to your Dropbox account in the client.  The client program manages your Dropbox folder.  It is very smart about synchronizing local and server files, and it gives a good visual indication of when the synchronizing is done.

Sendspace allows you to transfer very large files.  I generally zip, or compress, one or more files into a single huge file before uploading to the Sendspace server.  Sendspace is drop-dead simple to use.  You don't need any coaching from me--just go to the site and follow the instructions.  When a file is too huge to move around by any other means, Sendspace can nearly always move it between any two computers, as long as they both have access to the internet. Sendspace's free service offers the sender a link to delete the file from its servers, but if the sender doesn't bother, the file is deleted automatically after two weeks.
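That pre-upload bundling step is easy to script. Here is a minimal sketch in Python (my choice purely for illustration--Sendspace doesn't care how the archive gets made):

```python
import zipfile
from pathlib import Path

def bundle(paths, archive="bundle.zip"):
    """Compress several files into one archive, ready for upload."""
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for p in paths:
            # store each file by its bare name, not its full path
            zf.write(p, arcname=Path(p).name)
    return archive
```

Hand the resulting single .zip file to Sendspace's upload page instead of juggling many separate uploads.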

Paying for Sendspace increases the maximum allowed file size from huge to humongous.  It also lets you keep files on their servers indefinitely (as long as your account is paid up).

To send a file to someone else, you do have to give Sendspace their email address; Sendspace then emails the person a link which they can use to download the file.  As far as I can tell, Sendspace does not mine the emails and never spams any user or recipient of files.  That the service has remained spam-free is unusual these days, and that makes it one of my favorite internet companies right now, and one of the few I can recommend whole-heartedly.

Good services deserve good publicity; hence this blog entry.  Give them a try!  It won't cost you anything to try them out.  However, I don't recommend that you send any sensitive data using these (or any other "cloud-based") services.  There is never any guarantee that a third party company won't look inside your data.  Thankfully, though, the kinds of files I'm sending around are not likely to be desirable to anyone but me or my immediate co-workers, and they don't contain anyone's private information.

Friday, July 05, 2013

GoDaddy is evil

[Screenshot: the broken page, with SSIs failing.]
[Screenshot: the healthy page.]
Yesterday, I received a trouble report about a Linux shared-host website that resides on GoDaddy servers.  To my horror, the site now looked like the page shown on the right, instead of the page shown below it.  It was pretty clear that server-side includes (SSIs) had stopped working, and since I had not updated the site in a couple of weeks and all had been working well up until yesterday, it was also clearly because of something GoDaddy had done to the server.  If nothing else, their server logbook ought to show what had been done, so a developer like myself should be able to figure out how to adapt.  Bad enough that no advance notice had been given.

I immediately put in a call to GoDaddy technical support to find out what had happened and get some help figuring out how to get the site back to its former state of health.  After navigating numerous computerized phone menus and waiting on hold for about 15 minutes, I finally reached a human being, who immediately put me on hold and then disconnected the call.  This person did not call me back, so after a few more minutes, I put in a second call to GoDaddy support.  Same drill: after about 15 minutes, I got a person, who didn't know anything and put me on hold while he "contacted the server group".  After another 15 minutes, he returned to announce that I would have to fix the problem myself, as it was a scripting problem.  OK, I enquired, how shall I fix the problem?  My code hasn't changed.  And in the meantime, I had verified by extensive web searching that GoDaddy's forums had no help page showing how server-side includes ought to work.  Further, there were many entries in GoDaddy's forums within the past two weeks by other customers whose server-side includes had also stopped working.  "Sorry", the tech support guy said, "it's a scripting problem and we don't touch your code.  You'll have to fix it."

I was now wavering between disbelief and rage.  After spending another hour trying every wild suggestion, and everything I've ever encountered to get server-side includes working on Linux, I "patched" the problem for the short term by eliminating the SSIs altogether in the important pages of the website, so that my site once again had graphics and styling.

Returning the next day, fresh and rested, I was able to get server-side includes working again by making this small change:


    Bad directive:     <!--#include virtual="./shared/insert.txt" -->

    Good directive:    <!--#include file="./shared/insert.txt" -->
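For context, server-side include processing on a stock Apache host is typically switched on with directives like these (shown purely as an illustration--GoDaddy's actual shared-hosting configuration is not visible to customers, which is exactly the problem):

```apache
# Enable SSI processing for .shtml pages
# (in the main config, or in .htaccess where AllowOverride permits)
Options +Includes
AddType text/html .shtml
AddOutputFilter INCLUDES .shtml
```

If the host quietly changes any of this--or how virtual vs. file paths are resolved--includes can break with no change to your own code.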

Really?  One word change fixed this?  And GoDaddy tech support is too stupid to tell this to customers?  Needless to say, I don't trust GoDaddy.  I fully expect now that the pages will stop working at any moment due to some unknown change in their server configuration.

And, I will never, ever again use GoDaddy for any service.  What arrogance these large corporations are capable of developing towards their own customer base.  I'm still aghast.  They could have kept my business so easily.  Stay away from GoDaddy.  The word "evil" comes to mind.

I will be moving all business from GoDaddy as soon as can be arranged.

Wednesday, May 08, 2013

Google tracks clicks on its search links even if browser cookies disabled

[Screenshot: a Google search results page. Notice the status bar (lower left) while mousing over a result--the address shown there is a total lie.]

The two links below go to the same place.  I got the first link by selecting "Copy Link Location" over a search result from Bing, and the second link by selecting "Copy Link Location" over a search result from Google.


When you click a Google Search result, it takes you first to Google's web service (the longer link below), where Google records what you clicked (and who knows what else about you) before forwarding to the page you actually wanted.

I recommend that everyone switch away from Google search, since there are now good alternatives.

http://nebula.nasa.gov/blog/2011/08/16/white-paper-nebula-action/

http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CDMQFjAA&url=http%3A%2F%2Fnebula.nasa.gov%2Fblog%2F2011%2F08%2F16%2Fwhite-paper-nebula-action%2F&ei=uUeKUdKFIYjq0QGKmID4AQ&usg=AFQjCNHmbke5CQSlmqcgQuRmTlLIChVu3Q&sig2=nYGdEhP3nma2hoeao1h0oQ&bvm=bv.46226182,d.dmQ&cad=rja


I first noticed this a few months ago while trying to fix up old broken links on some websites, and I only noticed it because the Google web service slowed down during peak hours in the middle of the day.  Google has coaxed the browser into showing the false, shorter link in the status bar when you mouse over the longer link (using JavaScript, I presume).  It's only a matter of time until they figure out how to make "Copy Link Location" lie to us as well, after which only packet sniffing would catch them in the act.
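The real destination is carried in the wrapper's url query parameter, so it can be recovered mechanically. A quick Python sketch (just an illustration of the URL structure shown above, not anything Google documents):

```python
from urllib.parse import urlparse, parse_qs

def unwrap_google_link(link):
    """Return the real destination hidden in a Google /url redirect,
    or the link unchanged if it isn't one."""
    parts = urlparse(link)
    if parts.netloc.endswith("google.com") and parts.path == "/url":
        params = parse_qs(parts.query)
        if "url" in params:
            return params["url"][0]  # parse_qs percent-decodes for us
    return link

wrapped = ("http://www.google.com/url?sa=t&rct=j&source=web"
           "&url=http%3A%2F%2Fnebula.nasa.gov%2Fblog%2F2011%2F08%2F16%2F"
           "white-paper-nebula-action%2F")
print(unwrap_google_link(wrapped))
# http://nebula.nasa.gov/blog/2011/08/16/white-paper-nebula-action/
```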

I really don't understand why the tech community is not raising hell over this--it is yet another invasion of our privacy.  Even running a browser in private browsing mode would not save us from this kind of rigging--which, I'd guess, could be why Google did it.  That, and to charge their ad customers more accurately for click-throughs.  Either way, I don't like it.  Google's "Don't Be Evil" slogan is beginning to look a little ironic, and I've switched to Bing for the majority of my web searches.

Tuesday, November 27, 2012

Microsoft update kills web application via namespace collision

In my work at the Academy of Natural Sciences of Drexel University, I administer a database server and web server for the Phycology Section (the latter runs a bunch of REST web services on .NET 3.5).  Today, after I downloaded a number of Windows updates to the web server, one of the web services (here's an example) broke.  [Screenshot: the list of updates I had downloaded.]
After investigating, I noticed a long, messy compile error that had not existed before, on a page in the cache which I had not created.  I stared at the error and the page until I began to understand that there might be a name collision on the word "Site".  I had long used a master page called Site.master, so I renamed it to Site1.master and repeatedly recompiled until every page dependent on this master page had been changed to use the new name--after which the problem disappeared.

So answer me this.  How come an update to something in .NET 4 breaks a web service running on .NET 3.5?   And furthermore, how could anyone at Microsoft be dumb enough to suddenly purloin a common name such as "Site"?  Probably many programmers around the world are cursing and going through the same discovery process as I write this.  Bad Microsoft!  Bad!  Bad!

Monday, September 10, 2012

Undeletable Files on NTFS File System

Maintaining a web server on Microsoft Windows Server 2008 has mostly been straightforward and not very time-consuming.  But recently I was confronted with a small but extremely annoying hassle, having to do with 3 files within the web server's area that could not be read, could not be deleted, and could not even be included in a .zip file.  It was this last issue that first got my attention; I had been accustomed to zipping up the entire wwwroot area from time to time and shipping it off-disk as a backup.  This began to fail.

I knew right away that I had introduced the problem while attempting to administer permissions on Pmwiki, a third-party application written in PHP that was never intended to run on a Microsoft web server.  Permissions for the upload folder kept reverting so that uploading attachments to the wiki failed, and it was while wrestling with this periodically recurring conundrum that I shot myself in the foot by somehow removing my ownership of those three files.  To get around this boondoggle, I had made an entire new upload folder (from a backup), and just renamed the old "stuck" upload folder.

Then, the next time I tried my handy zip-it-all-up backup trick, I got an error.  The error, of course, happened right at the end of a long .zip process, and the entire archive failed to be created.  I now had no off-disk backup happening, all due to these three "stuck" files, which I could see in the folder but could neither read, delete, nor change permissions on.  How anyone could deal with such problems without Google, I cannot imagine.  Thank the universe for search engines.

Within minutes of beginning my search, I found this support page on Microsoft's site.  At the very bottom, I found what I needed, which was the "if all else fails" strategy, which Microsoft called "Combinations of Causes" (that gave me a chuckle).  This strategy required me to use a utility called "subinacl.exe" which the support page cited as being part of the Resource Kit.

Googling some more, I soon found that the Resource Kit was a very expensive (as in $300) book with tools on a disk in the back.  I wasn't going to buy it.  Then it occurred to me just to search for "subinacl.exe", and thankfully, I found that Microsoft had made it available as a download.  So I downloaded it.  But idiot that I am, I failed to note where it installed itself (somewhere obscure, I promise you).  I had to uninstall it and then reinstall, this time noting down the install location, which, for those of you going through the same thing, was C:\Program Files\Windows Resource Kits\Tools\.

So I took a deep breath and constructed the command line I needed in Notepad.  Then I opened a command window, browsed to the obscure install folder noted above, carefully pasted my unwieldy command into the window, and (holding my breath) hit Return.  A frightening amount of technobabble appeared as the command executed.  After a few tries, I got it to succeed, though it still gave warnings.  I had to do this four times: once for each file, and then once for the containing folder.  The model command line is:


subinacl /onlyfile "\\?\c:\path_to_problem_file" /setowner=domain\administrator /grant=domain\administrator=F

[Screenshot: the command window with subinacl's joyful output.]
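Rather than pasting the command four times, the repetition can be scripted. A hypothetical Python wrapper (the path and account name are placeholders; subinacl.exe itself is Windows-only, so this just builds the same command line shown above for each stuck item):

```python
import subprocess

# assumed install location, per the paragraph above
SUBINACL = r"C:\Program Files\Windows Resource Kits\Tools\subinacl.exe"

def take_ownership_cmd(path, account=r"domain\administrator"):
    """Build the subinacl command line for one stuck file or folder."""
    return [SUBINACL,
            "/onlyfile", "\\\\?\\" + path,   # \\?\ long-path prefix
            "/setowner=" + account,
            "/grant=" + account + "=F"]

# One pass per stuck file, then one for the containing folder:
# for p in stuck_paths:
#     subprocess.run(take_ownership_cmd(p), check=True)
```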

Altogether, the failures of Pmwiki and Microsoft file permissions have cost me hours of hassle over the five years I have managed the server.  This latest offense was just the crowning insult.  Managing file uploads on IIS is a challenge at any time, as the web server is leery of anyone writing into a file zone that it serves.  But this dodgy PHP open-source package (Pmwiki), tested only on Linux, is, in retrospect, hardly worth the effort.  I haven't tried installing other wiki software yet (no time for the learning curve!), but I hope there's one that works better on a Windows server.

Microsoft's support page did do the trick.  The problem shouldn't be possible anyway.  The big Administrator account should always be able to delete a file--what were they thinking?--but at least they provided the necessary utility to dig myself out of the pit. Still, I'm not exactly feeling kindly towards Microsoft at this moment.  Or towards Pmwiki.

Thursday, June 28, 2012

ERROR - The 'Microsoft.Jet.OLEDB.4.0' provider is not registered on the local machine.

I support a .NET application which reads a version table in its background database, which is Microsoft Access.  If the version is wrong, the application refuses to run.  Recently, one of the application users upgraded to a 64-bit machine, and the application began reporting that the backend database was "the wrong version", even though it was in fact the same database that had worked fine on a 32-bit machine.  After some debugging, I unearthed the following error message (which was being swallowed): ERROR - The 'Microsoft.Jet.OLEDB.4.0' provider is not registered on the local machine.

Thanks to Google and other bloggers, I learned that Visual Studio 2010 Professional compiles, by default, with an option called "Any CPU".  But after recompiling the application with the "x86" option (which is an alternative to "Any CPU"), the application began to work on either a 32-bit or 64-bit CPU.  Huh? 

Apparently, with the "Any CPU" option, .NET ran the application as 64-bit (because it happened to reside on a 64-bit machine at the time) and a mismatch occurred when it tried to read from the 32-bit Access database.   By forcing the compiler to be "x86", I forced the program to run as a 32-bit process even though it resides on a 64-bit machine, and no mismatch occurred.  
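The same class of mismatch can bite any runtime that loads 32-bit native components from a 64-bit process. For what it's worth, here is how you can check the bitness of a running process in Python (just an illustration of the concept, not part of the .NET fix):

```python
import struct
import platform

# Pointer size tells you the bitness of *this process*,
# regardless of what the OS or CPU supports.
bits = struct.calcsize("P") * 8
print("running as a %d-bit process on %s" % (bits, platform.machine()))
```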

I did notice that the application starts a bit more slowly than it did on a 32-bit machine, but once running, it warms up and runs just fine.

Those compiler options are very poorly named.  And now when Office upgrades, someday, to 64-bits, I'll probably have to recompile again.  Thanks, Microsoft.


Saturday, December 03, 2011

How to write to the disk (make a writable folder) in ASP.NET

To do file uploads, some administration is always necessary on the server side. ASP.NET tries very hard to prevent users from writing anywhere on the server, so we have to take special steps if file upload is required. In particular, the IIS7 server will not let you both Execute and Write into the same folder. For the record, here are the steps for IIS7 on Windows Server 2008 (assuming you are system administrator on the server):

* the user creates an uploads folder and sends its file spec to the administrator
* the administrator opens the Internet Information Services (IIS) Manager, browses to that folder, and right-clicks it to change file permissions as follows for the NETWORK SERVICE account:
** remove Read and Execute permission
** add Write permission

Both permissions should be changed in one step, and that should be all that is necessary. But test the write and subsequent read carefully; if either does not work, delete the folder, create a new one, and start all over again.
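That careful write-then-read test can be automated. A small sketch (Python here just for illustration; an equivalent ASP.NET health-check page would exercise the same permissions through the NETWORK SERVICE account):

```python
import os
import uuid

def folder_is_writable(path):
    """Round-trip check: write a probe file, read it back, clean up."""
    probe = os.path.join(path, "probe-" + uuid.uuid4().hex + ".txt")
    try:
        with open(probe, "w") as f:
            f.write("ok")
        with open(probe) as f:
            return f.read() == "ok"
    except OSError:
        return False
    finally:
        try:
            os.remove(probe)
        except OSError:
            pass
```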

If you are using a hosting service, there should be some special procedure to get the permissions changed. In my experience, changing the permissions for this purpose has spotty success and can lead to a drawn-out hassle. I suppose it's worth it to have a reasonably secure server.


Tuesday, November 01, 2011

VS.NET 2010 fails to compile program created earlier in VS.NET 2008

Trying to recompile in Visual Studio 2010 a program originally created in Visual Studio 2008 (which is still on my PC), I got this baffling message:

Error 9 Task failed because "sgen.exe" was not found, or the correct Microsoft Windows SDK is not installed. The task is looking for "sgen.exe" in the "bin" subdirectory beneath the location specified in the InstallationFolder value of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SDKs\Windows\v6.0A. You may be able to solve the problem by doing one of the following: 1) Install the Microsoft Windows SDK for Windows Server 2008 and .NET Framework 3.5. 2) Install Visual Studio 2008. 3) Manually set the above registry key to the correct location. 4) Pass the correct location into the "ToolPath" parameter of the task.

These suggestions were too bizarre even to consider, and so I Googled. Right away, I found this nice blog entry which helped me out, and just in case it were to go away, I'm duplicating the helpful information it contains below. So, courtesy of the "dukelupus" blog, everything after this paragraph is copied verbatim from that blog.

Changing the registry key will not help nor will adding C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\ to the path. I did not try other solutions...I used FileMon to check what Visual Studio is looking for – and it appears that it will always look for that file at C:\WINDOWS\Microsoft.NET\Framework\v3.5\, which does not contain sgen.exe.

Just copy sgen.exe from C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\ to C:\WINDOWS\Microsoft.NET\Framework\v3.5\ and everything will now compile just fine. Here, to make your life easier, copy command:

copy /y "C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\sgen.exe" "C:\WINDOWS\Microsoft.NET\Framework\v3.5\"

Good luck!


Wednesday, August 24, 2011

SQL Server Management Studio and the "Saving changes is not permitted" error.

Sometimes after a fresh install of Microsoft SQL Server Management Studio, a default setting prevents you from changing the design of tables in the visual designer mode.  This can be extremely frustrating, because it is not easy to figure out how to turn off the check that is blocking table redesign.  The error is announced in an annoying popup whose text cannot be copied:

"Saving changes is not permitted. The changes you have made require the folloing tables to be dropped and re-created. You have either made changes to a table that can't be re-created or enabled the option Prevent saving changes that require the table to be re-created."

Here's how to solve this in Microsoft SQL Server Management Studio 2008:

1) Go into the Tools...Options... menu

2) In the popup, on the left, expand "Designers" by clicking on the plus

3) In the "Table Options" shown to the right, MAKE SURE that "Prevent saving changes that require table re-creation" is NOT checked.

I've lost a few hours of my life to this nuisance. Hope this will help someone else out of the conundrum.


Friday, June 10, 2011

ODBC workaround (Access to SQL Server) for 64-bit Windows 7

I've been avoiding 64-bit Windows due to various incompatibility rumors, but this case takes the cake, as it is entirely Microsoft's fault. My work place uses a variety of shared Access databases, located on a network drive, that connect via ODBC System DSN's to a SQL Server 2008.

Even though all DSN's appeared to be configured correctly on my colleague's brand new (64-bit) Windows 7 machine, and the ODBC connections passed their test, the actual database declined to connect to the server. Thanks to various discussion groups, we finally figured out that the graphical user interface accessible from the Administrative Tools applet in Control Panel actually brings up a 64-bit ODBC application, whereas we (for backwards compatibility) needed the strangely hidden 32-bit System DSN window. To run it, we had to browse to this path:

C:\Windows\SysWOW64\odbcad32.exe

Clicking on odbcad32.exe runs the 32-bit version of the ODBC connection setter upper. There, we re-created all the System DSN's, and finally the Access databases were happy.

[Screenshot: the 32-bit ODBC Data Source Administrator.]

By default, the Windows GUI presents the 64-bit ODBC connection setter upper (which, I believe, lives somewhere under C:\Windows\system32).  Browsing manually to the SysWOW64 path above and running that application, then re-adding the ODBC connections, makes it work.

In the meantime, 3 days of work were lost to this problem.


Friday, April 22, 2011

Richard Stallman's speech "A Digital Society", Apr 20, 2011

Richard Stallman in 2008
This week Richard Stallman, founder of the free software ("free as in freedom") movement, gave a talk at the University of Pennsylvania. It's been far too long since I heard a good radical plying his trade. Stallman is blunt and uncompromising, fully intending to stir folks up and make them think about how they may be being manipulated by those who wish to track and surveil and control. His talk included some often amusing provocations:
1. He calls the Kindle a "Swindle" and the Nook a "Schnook" because we are not allowed to lend a book we have purchased to our friends, and because the seller can recall a book from your possession at any time (as Amazon once famously did with George Orwell's "1984").
2. Modern devices (including cable boxes, cell phones, and computers) often surveil us and may subject us, later, to censorship.
3. The use of proprietary and closed data formats and software increases the chances of our privacy being invaded, and decreases our ability to learn the art of software programming.
4. If only all the effort and expense devoted to the public war on sharing, deemed by Stallman an attack on community (including Digital "Restrictions" Management (DRM), and the Digital Millennium Copyright Act that makes it illegal for us to know how some software works)--if only this effort could be redirected towards, say, preventing automobile accidents, thousands of lives per month might be saved.
5. The precariousness of our right to access the global internet is lamentable, because according to Stallman, "the U.S. government has been bought".
6. According to Stallman, it is every citizen's duty to poke Big Brother in the eye.
7. Blame the government, he says, and blame the companies the U.S. government works for.

Stallman gave permission to publish a recording of his speech, ironically made by a student on her devilish iPhone--but only if it was converted to the free audio format Ogg Vorbis first.  I converted the .m4a files with the help of a utility inappropriately named the "Free Convert M4A to MP3 AMR OGG ACC Converter"; it stubbornly truncated its conversions until I agreed to pay $25.  Here in two parts is the recording: Part 1 (< 3 minutes) and Part 2 (1 hr+).  These are .zip files--sorry to make you unzip them, but I don't think my hosting service can accommodate much realtime streaming.

One of the more interesting parts came during the question-and-answer session at the end, after the recording ended.  Stallman explained that, in the 1990s, his movement split into two parts: 1) the Free Software Foundation (Stallman's first priority), and 2) the "Open Source" movement.  Stallman stated that Open Source is focused on finding the best ways to develop software as a group, whereas the FSF remains focused on the ethical issues surrounding misuse of digital technology, and that, while he does not disagree with Open Source aims, if he were to start advocating for Open Source purposes, the ethical issues would get lost in the morass.  And so he kept his organization completely separate, and he finds it frustrating that people often confuse Open Source (a term he claims was coined at the time of the split) with the Free Software Foundation.


Thursday, May 20, 2010

The challenge of printing a .NET textbox

On Yahoo Answers, someone asked how to print the content of a RichTextBox control in .NET. I did finally manage to print a (simpler) TextBox control in .NET, and that was difficult enough. I am documenting that here for myself or anyone else facing this challenge.

First, a little rant. Before the .NET framework was released (~2000), Microsoft provided a handy .print method on the RichTextBox and TextBox controls, and all a programmer needed to do was call it. But in Microsoft's almighty wisdom (and trying to be just like Java), the simple and highly useful .print method was removed in .NET, and now you have to do all the following steps successfully. And note, it's just as difficult in Java--what were the language developers thinking? I imagine they were thinking to provide maximum flexibility, but why not provide a quick-and-dirty out for the rest of us?

In the example below, my form (called JobForm) is printing the contents of a TextBox control (called textBoxRight).

STEP 1: DECLARE VARIABLES

You need a bunch of special fields in your form code to keep track of printing information. To get started, just use this:

#region printing declarations

private Font midlistRegular = new Font(
    "Microsoft Sans Serif",   // the family name must be exact, or GDI+ silently substitutes a default font
    7.8f,
    FontStyle.Regular,
    GraphicsUnit.Point);

private Font midlistRegular1 = new Font(
    "Microsoft Sans Serif",
    7.6f,
    FontStyle.Bold,
    GraphicsUnit.Point);

/// <summary>
/// Page margin, in inches
/// </summary>
private int printMargin = 1;

private int lastPosition = 0;

private int lastIndex = 0;

/// <summary>
/// Set in Paint (of form) for use when printing
/// </summary>
private float screenResolutionX = 0;

#endregion printing declarations

STEP 2: INSTANTIATE A PRINTDOCUMENT OBJECT AND CREATE ITS EVENT CODE

You need a special object that raises an event each time one page has been printed and causes the next page to be printed. It is a non-visible control and is called "PrintDocument" (in library System.Drawing.Printing).

In the Windows Designer, drag a non-visible "PrintDocument" control onto your form (System.Drawing.Printing.PrintDocument).  It will be instantiated on your form as "printDocument1".  Double-click the PrintDocument control on the form to create the "PrintPage" event handler, and give it the following code (using your TextBox name instead of "textBoxRight"):

private void printDocument1_PrintPage(object sender, PrintPageEventArgs e)
{
    try
    {
        e.HasMorePages = this.PrintOnePage(
            e.Graphics,
            this.textBoxRight,
            this.printDocument1,
            this.screenResolutionX);
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}

STEP 3: CREATE THE PRINTONEPAGE() METHOD

Now create the PrintOnePage() method needed by the above code.  Use the boilerplate code below unchanged (and I apologize that it's so ugly):

/// <summary>
/// Goes through each line of the text box and prints it
/// </summary>
private bool PrintOnePage(Graphics g,
    TextBox txtSurface, PrintDocument printer,
    float screenResolution)
{
    // Font textFont = txtSurface.Font;
    Font textFont =
        new Font("Microsoft Sans Serif",
                 10.0f, FontStyle.Regular,
                 GraphicsUnit.Point);

    // go line by line and draw each string
    int startIndex = this.lastIndex;
    int index = txtSurface.Text.IndexOf("\n", startIndex);

    int nextPosition = (int)this.lastPosition;
    // just use the default string format
    StringFormat sf = new StringFormat();

    // sf.FormatFlags = StringFormatFlags.NoClip | (~StringFormatFlags.NoWrap );
    // get the page height, in pixels
    int lastPagePosition = (int)(((printer.DefaultPageSettings.PaperSize.Height / 100.0f) - 1.0f) * (float)screenResolution);
    // int resolution = printer.DefaultPageSettings.PrinterResolution.X;

    // use the screen resolution for measuring the page
    int resolution = (int)screenResolution;

    // calculate the maximum width, in pixels, from the default paper size and the margin
    int maxwidth =
        (int)((printer.DefaultPageSettings.PaperSize.Width / 100.0f - this.printMargin * 2) * resolution);

    // convert the margin to pixels
    int printMarginInPixels = resolution * this.printMargin + 6;
    Rectangle rtLayout = new Rectangle(0, 0, 0, 0);
    int lineheight = 0;

    while (index != -1)
    {
        string nextLine = txtSurface.Text.Substring(startIndex, index - startIndex);
        lineheight = (int)(g.MeasureString(nextLine, textFont, maxwidth, sf).Height);
        rtLayout = new Rectangle(printMarginInPixels, nextPosition, maxwidth, lineheight);
        g.DrawString(nextLine, textFont, Brushes.Black, rtLayout, sf);

        nextPosition += (int)(lineheight + 3);
        startIndex = index + 1;
        index = txtSurface.Text.IndexOf("\n", startIndex);
        if (nextPosition > lastPagePosition)
        {
            this.lastPosition = (int)screenResolution;
            this.lastIndex = index;
            return true; // reached end of page
        }
    }

    // draw the last line
    string lastLine = txtSurface.Text.Substring(startIndex);
    lineheight = (int)(g.MeasureString(lastLine, textFont, maxwidth, sf).Height);
    rtLayout = new Rectangle(printMarginInPixels, nextPosition, maxwidth, lineheight);
    g.DrawString(lastLine, textFont, Brushes.Black, rtLayout, sf);

    this.lastPosition = (int)screenResolution;
    this.lastIndex = 0;
    return false;
}

STEP 4: ADD CODE TO YOUR FORM'S PAINT EVENT

In Windows Designer, open your form in graphical view mode. Open the form's Properties Window and click the lightning bolt to see events. Double-click on the form's Paint event to create it, and paste the boiler-plate code from my Paint event below into your form's paint event (your event will have a different name, using your form's name, than mine does below):

private void JobForm_Paint(object sender,
    System.Windows.Forms.PaintEventArgs e)
{
    // save the screen's horizontal resolution (dots per inch)
    this.screenResolutionX = e.Graphics.DpiX;

    // set the last position of the text box
    this.lastPosition = (int)this.screenResolutionX;
}

STEP 5: ACTUALLY PRINT THE TEXTBOX

To actually print the contents of the TextBox, you'll need code like this in a print menu or button event:

PrintDialog printDialog1 = new PrintDialog();
if (printDialog1.ShowDialog() == DialogResult.OK)
{
    this.printDocument1.PrinterSettings = printDialog1.PrinterSettings;
    this.printDocument1.Print();
}

I haven't paid close attention to all of the above code since I got it working. I snarfed much of it from various sources on the web (thank you, bloggers!). It could be enhanced or cleaned up a lot, but maybe it will help another programmer get the job done.


Thursday, March 04, 2010

Wrestling to sort the ASP.NET GridView

It's supposed to be easy, practically codeless to use--and it is, sometimes: when a GridView is populated with the same dataset every time the page loads. But I had a drop-down list where a condition had to be selected, and the grid then had to be populated based on that selection. I got as far as the dropdown selecting and the grid populating, but there were two problems: paging didn't work, and sorting didn't work. I decided to turn paging off, so sorting was my last remaining issue.

Umpteen useless web articles later, I resorted to the paper books stashed on my shelf at home. First stop was Murach's ASP.NET 2.0, which is alleged to be so good. But it held no love for me. Second stop was Dino Esposito's "Programming Microsoft ASP.NET 2.0: Core Reference"--and finally, I got the help I needed.

I'm blogging about this because a good book deserves real credit. Many mysteries were unraveled by the Esposito book, including that I needed to explicitly re-populate the GridView when its "Sorting" event fired. Esposito's directions were extremely explicit: use the "Sorting" event's GridViewSortEventArgs (the "e" parameter) to find out the sort key, and write a special stored procedure that uses the sort key to ORDER the data differently. These last bits of information were the treasure that finally let me get sorting working.
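
Concretely, the Sorting handler ends up looking something like the sketch below. This is a minimal reconstruction, not my original code: the GridView1, ddlCondition, BindGrid, and GetData names are illustrative assumptions.

```csharp
// Hedged sketch: re-populate the grid explicitly when Sorting fires.
protected void GridView1_Sorting(object sender, GridViewSortEventArgs e)
{
    // e.SortExpression carries the column name of the header the user
    // clicked; pass it through to the stored procedure as @order_by
    BindGrid(e.SortExpression);
}

private void BindGrid(string orderBy)
{
    // re-run the query with the requested sort key and re-bind
    GridView1.DataSource = GetData(ddlCondition.SelectedValue, orderBy);
    GridView1.DataBind();
}
```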

I'm posting below a copy of the rather odd-looking stored procedure that I ended up using, for your edification. The "@order_by" parameter names the column to sort on, and the odd way of constructing the query from strings allows brackets to be placed around any strange or keyword column names:

CREATE PROCEDURE [dbo].[lsp_NRSA_find_counts_and_person_by_naded] 
@naded_id int,
@order_by nvarchar(12)
AS

BEGIN

 IF @order_by = ''
 BEGIN
  SET @order_by = 'slide'
 END
 
 EXEC ('SELECT * ' + 
 'FROM vw_NRSA_cemaats_by_count_and_person ' +
 'WHERE naded = ' + CAST(@naded_id AS nvarchar(20)) + 
 ' ORDER BY [' + @order_by + ']')

END
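
One caveat worth noting: string-built SQL like this is open to injection if @order_by can ever come from the browser. SQL Server's built-in QUOTENAME function adds the same brackets but also escapes any stray ] characters, and sp_executesql keeps the id a true parameter. A hedged alternative sketch of just the EXEC portion:

```sql
-- safer variant: QUOTENAME brackets (and escapes) the column name,
-- and sp_executesql passes @naded_id as a real parameter
DECLARE @sql nvarchar(500);
SET @sql = N'SELECT * FROM vw_NRSA_cemaats_by_count_and_person ' +
           N'WHERE naded = @naded ORDER BY ' + QUOTENAME(@order_by);
EXEC sp_executesql @sql, N'@naded int', @naded = @naded_id;
```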


Sunday, December 13, 2009

Boot Camp Revisited

I've been using Boot Camp to run Windows XP on my MacBook since Mac OS X 10.4 ("Tiger"). The first version of the drivers was crap, and it was difficult to get XP installed correctly. But once XP was installed, I was able to run my development tools on it as well as on any native Intel PC.

That all improved quite a bit in version 10.5 ("Leopard"). I was able to update the drivers on my XP installation, and that was a big improvement.

Things went along well for about 2 years. Then, the machine started exhibiting problems. The fan would suddenly come on, and then XP would reboot itself without asking me--just once each day, after I turned the machine on. I feared hardware--was it the RAM I had added, perhaps? But finding no cause, I just lived with the issue until just after the 3-year warranty on the machine ran out.

As soon as the warranty expired, XP crashed and I could no longer boot into it. And from Mac OS X, I was unable to use Boot Camp to remove the XP partition and start over. During a work crisis, needing my XP machine, I had to buy a new Windows laptop and set the MacBook aside for the time being.

A few weeks later, with work settling down, I went back to the MacBook to figure out how to make it run XP again. I upgraded to the latest Mac OS X ("Snow Leopard"), but Boot Camp still refused to run. So I reinstalled the OS from scratch, but apparently I was still left with the original partitions; Mac OS X did not give me a way to blow away and recreate the partitions during install. I was feeling seriously offended at this point. Was my investment in a dual-boot machine to be for nothing?

After much reading of forum posts, it seemed likely that a large file had located itself somewhere in the middle or near the end of the drive, and Mac OS X could not defragment the partitions, or even delete them. Eventually, I muddled through. I bought an external disk, managed to copy Mac OS X onto it and make it bootable. Running from the USB disk, I was then able to use a disk utility to blow away and recreate the partition for Mac OS X on the main drive. Then I copied the Mac OS back onto the main drive. All this was facilitated by a great free Mac utility called SuperDuper, which I do very much appreciate. Having gone through these contortions, I was at last able to reinstall Windows XP. It all took staggering amounts of time, during which I considered just junking the Macbook.

For those who claim that Macs are so superior to Windows, I post this. I can't see a speck of difference in the hassle factor of either operating system; sooner or later, arcane knowledge and extreme patience are required. How do non-geeks ever cope?

The good thing about having both OS's around is competition--the two companies do push each other to do better things. In the latest version of Mac OS X, for example, networking is a breeze: I can read my Windows drives, and the printer installed without trauma. Yay! Why couldn't I have had that 3 years ago, Apple?


Tuesday, November 10, 2009

SQL Server Management Studio 2008 logon persistence problem

I very often open SQL Server Management Studio 2008 for the same server. Problem is, the very first time I accessed this server, I did so using a different user name and password. Forever after, SQL Server Management Studio refused to remove that old entry from the dropdown list in the "Connect to Server" popup window.

This made it necessary to click to change the user and enter a new password about 80 times per day--all because this premium, allegedly very smart, sophisticated software had decided never to forget a former entry--even though I had deleted the Registered Server, recreated it, and so forth. I'm frankly angry about this nuisance, which wouldn't matter if I didn't have to open and close the tool so many times per day.

If anyone has any concrete suggestion on how I can defeat this thing, please let me know. I see from the internet that I am not the only one having this frustration, but I have yet to find any blog entry or suggestion to alleviate the problem.

UPDATED with an answer in Aug. 2010:

For SQL Server Management Studio 2008 on Windows XP, to reset all your saved logins, delete the file:

C:\Documents and Settings\%username%\AppData\Roaming\Microsoft\Microsoft SQL Server\100\Tools\Shell\SqlStudio.bin

or possibly:

C:\Documents and Settings\[user]\Application Data\Microsoft\Microsoft SQL Server\100\Tools\Shell

----

For SQL Server Management Studio 2008 on Windows 7, to reset all your saved logins, delete the file:

C:\Users\%username%\AppData\Roaming\Microsoft\Microsoft SQL Server\100\Tools\Shell\SqlStudio.bin


Monday, September 28, 2009

Delete a maintenance plan after changing SQL Server 2008 name


I've seen a few posts on how to delete an old maintenance plan after you've changed the computer name for a default installation of SQL Server 2008, but none of them fully solved the problem. Below is what I had to do--with the caveat that you have to look up your own IDs:

USE msdb

-- delete in this order: the log rows reference the subplan,
-- and the subplan references both the job and the plan

DELETE 
FROM dbo.sysmaintplan_log 
WHERE subplan_id = '36F8247F-6A1E-427A-AB7D-2F6D972E32C1'

DELETE 
FROM dbo.sysmaintplan_subplans 
WHERE subplan_id = '36F8247F-6A1E-427A-AB7D-2F6D972E32C1'

DELETE 
FROM dbo.sysjobs 
WHERE job_id = '3757937A-02DB-47A6-90DA-A64AE84D6E98'

DELETE 
FROM dbo.sysmaintplan_plans 
WHERE id = 'C7C6EFAA-DA4D-4097-9F9F-FC3A7C0AF2DB'


Sunday, May 03, 2009

C# code to get schema of an Access table

I just wanted to dump the schema of a table in Microsoft Access. There is a lot of code on the web that purports to do this, but most of it didn't actually work, and most of it was not in C# (my current preferred language), so I am posting this here for anyone who needs it. Just change the path to the database file in the first line of code, and the name of the table ("taxa" in my example) in the second. This program dumps the table schema to a file created by the LogFile object (also attached below) located in the application folder. You'll need "using System.Data;" and "using System.Data.OleDb;" at the top of the file (plus "using System.Windows.Forms;" for the MessageBox call). Or, just download the code here. The code:




OleDbConnection conn =
 new OleDbConnection(
    "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" +
    "C:\\phycoaide\\phycoaide.mdb;Persist Security Info=False;");

// retrieving schema for a single table
OleDbCommand cmd = new OleDbCommand("taxa", conn);
cmd.CommandType = CommandType.TableDirect;
conn.Open();
OleDbDataReader reader =
 cmd.ExecuteReader(CommandBehavior.SchemaOnly);
DataTable schemaTable = reader.GetSchemaTable();
reader.Close();
conn.Close();

LogFile.WriteLine(" ");
foreach (DataRow r in schemaTable.Rows)
{
 LogFile.WriteLine(" ");
 foreach (DataColumn c in schemaTable.Columns)
 {
    LogFile.WriteLine(c.ColumnName + ": " + r[c.ColumnName]);
 }
}
MessageBox.Show("done");


The LogFile class creates a file in the application folder from which your program runs:




// 
// No copyright; free for reuse by anyone
// 
// Pat G. Palmer
// ppalmer AT harbormist.com
// 2009-05-04
// Opens a single-threaded log file to trace execution
namespace Ansp
{
    using System;
    using System.IO;            // file readers and writers
    using System.Windows.Forms; // Application object

        /// <summary>
        /// Singleton that appends log entries to a text file
        /// in the application folder.  If the file grows
        /// to be too large, it deletes itself and starts over.
        /// The file is kept open until the application ends
        /// and implements the "dispose" pattern in case things
        /// do not end gracefully.
        /// </summary>
    public class LogFile : IDisposable
    {
        private static int maxsize = 470000;
        private static string fileSuffix = "_log.txt";
        private static string fileSpecification;
        private static StreamWriter filewriter;
        private static LogFile instance;

        private LogFile()
        {
        }

        ~LogFile()
        {
            this.Dispose(false);
        }

        public static void InitLogFile()
        {
            if (instance == null)
            {
                instance = new LogFile();
            }

            string stringMe = "InitLogFile: ";
            try
            {
                if (Application.ProductName.Length == 0)
                {
                    fileSpecification = Application.StartupPath + "\\" +
                       "Test" + fileSuffix;
                }
                else
                {
                    fileSpecification = Application.StartupPath + "\\" +
                       Application.ProductName + fileSuffix;
                }

                // restart file if too big
                if (File.Exists(fileSpecification))
                {
                    FileInfo myFileInfo = new FileInfo(fileSpecification);
                    if (myFileInfo.Length > maxsize)
                    {
                        File.Delete(fileSpecification);
                    }

                    myFileInfo = null;
                }

                // restart file with appending
                filewriter = new StreamWriter(
                   fileSpecification, true, System.Text.Encoding.UTF8);

                // start log with standard info
                WriteLine("\r\n---------------------------------------------");
                string tempString = stringMe +
                    Application.ProductName + " " +
                    Application.ProductVersion +
                    " log opened at " + 
                    DateTime.Now;
                WriteLine(tempString);
                WriteLine(stringMe + "username=" + SystemInformation.UserName);
                WriteLine(stringMe + Application.StartupPath);
            }
            catch
            {
            }
        }

        public static void WriteLine(string myInputLine)
        {
            try
            {
                if (instance == null)
                {
                   InitLogFile(); // first time only
                }
                if (myInputLine.Length != 0)
                {
                   filewriter.WriteLine(myInputLine);
                   filewriter.Flush(); // update file
                }
            }
            catch
            {
            }
        }

        public static void Close()
        {
            instance.Dispose();
        }

        /// <summary>
        /// Implement IDisposable.
        /// Do not make this method virtual.
        /// A derived class must not override this method.
        /// </summary>
        public void Dispose()
        {
            this.Dispose(true);
            //// Now, we call GC.SupressFinalize to take this object
            //// off the finalization queue and prevent finalization
            //// code for this object from executing a second time.
            GC.SuppressFinalize(this);
        }

        private void Dispose(bool disposing)
        {
            if (disposing)
            {
                // no managed resources to clean up
            }
            if (instance != null)
            {
                if (filewriter != null)
                {
                    try
                    {
                        filewriter.Flush();
                        filewriter.Close();
                    }
                    catch
                    {
                    }

                    filewriter = null;
                } // end if filewriter not null
            } // end if instance not null
        }

    } // end class LogFile()
} // end namespace


Sunday, June 01, 2008

using Ruby, Tk and Eclipse

It took me a long time to learn how to do this. I found that on Windows XP, I had to use exactly these versions and install them in this order (rebooting in between):

* Ruby one-click installer v 1.8.6
* ActiveState Tcl v 8.4.x (NOTE: 8.6 did NOT WORK)

After this, the machine path was set to find both programs, and I was able to configure Eclipse to compile using this version of Ruby, so I can now use Eclipse as the IDE. The trick: in the Ruby perspective, first create a Ruby project; at that point, Eclipse gives you a chance to select a different Ruby virtual machine. Make sure you then choose the Ruby.exe installed by the Ruby one-click installer.


Sunday, May 11, 2008

Ruby and Tk on Windows may as well be a fantasy

Has anyone ever actually tried to use Tk to build a GUI with Ruby on Windows? Yes, Ruby runs on Windows. Yes, Tk runs on Windows. But the two do not communicate. Online instructions for binding Tk and Ruby are primarily for Linux, and ominously, use of Tk seems to require that one build one's own Ruby from source in order to link Ruby with Tk.

If so, that is very discouraging, and I probably won't consider it worth doing. Furthermore, I would say it is false advertising the way so many web sites glibly claim "and it runs on Windows too".

There are numerous discussion threads on the web where people have sought help with this Tk-Ruby linkage problem. These threads quickly degenerate into hostility or contempt from Linux zealots toward the asking party, who continues to claim "but it doesn't work". I did finally find one posting claiming to achieve a solution. That method would require installing multiple build tools, including a C++ compiler! Not only would it take me at least two days to make it work, but there is also little hope of getting lab administrators where I am teaching to install something that burdensome.

Please tell me that Tk and Ruby on Windows is not an over-inflated pipe dream. I hope to hear otherwise, but until I do, Ruby--and more importantly, the much-touted supportive Ruby community--has fallen somewhat in my estimation.


Saturday, February 16, 2008

Low-cost MacBook memory upgrades

Recently a friend advised me that she had upgraded her MacBook memory at very low cost by contacting the memory manufacturer directly. So I gave it a try. For less than $70, I was able (in one week) to upgrade my first-gen MacBook's RAM from 512 MB to 2 GB. The http://crucial.com/ website will scan your system and identify the correct memory. Hard to believe, but apple.com would have sold me the same memory for $300.

I had to install the memory myself. A quick Google brings up several how-to sites. It worked the first time, but it did help that I knew roughly how much pressure to apply when pushing the modules in (quite a bit, actually). Newbies might not press hard enough, or might press too hard and break something. Be careful!

One important warning, though: if your MacBook is under warranty, be sure to retain your old memory modules. You'll need to reinstall them if you have to get the unit serviced (because otherwise, so they say, the warranty might not be valid).


Thursday, December 27, 2007

Suppressing VS.NET Compiler Warnings (2005, 2008)

"1591" was the magic string that I needed, and this is the sorry tale of how to find that out.

As a rule, I don't like suppressing compiler warnings, but there is a time for everything. My time came when I inherited a huge mass of ill-behaving C# code and began adding XML comments. I was immediately overwhelmed by hundreds of warnings that said "Missing XML comment for publicly visible type". I wanted to compile the comments that I had added without being nagged by the compiler for not having added the remaining 300 possible XML comments as well.

I knew that Visual Studio 2005 would let me suppress specific warnings in the project build properties. However, I didn't know the warning number that I needed to supply. Microsoft, in their great goodness, has suppressed the showing of warning numbers--they only show the text. A few googles later, I knew that it was either 1591 or CS1591, but no one anywhere explained how, in general, to find the full list of warning numbers. I've wanted this list many a time in the past, so I set out to find out, once and for all.

Eventually, I found that I needed to start at the top-level C# Reference page in MSDN2 (for the appropriate version of VS.NET), then search on "compiler warning" plus the warning text. So searching on "compiler warning missing XML comment" got me the precious warning number I needed: CS1591. But then I had to psychically understand, of course, that the CS must be left off and only the 1591 entered.

See my glorious build screen which finally suppressed the evil hundreds of unwanted warnings:



UPDATE in Oct 2008: Now that I am using Visual Studio 2008, I have learned that I can right-click a warning in the Error List pane, and it will pop up documentation about the warning that includes its error level and number; from that, I can derive the four digits to place in the suppress box of the project's Build properties. It is not necessary to search the Microsoft website. I don't know whether this feature was present in Visual Studio 2005 (and I just didn't know it) or not.
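
For what it's worth, the suppress box in the Build properties just feeds an MSBuild property, so the same setting can also be made by hand in the .csproj file; a sketch of the relevant fragment:

```xml
<PropertyGroup>
  <!-- semicolon-separated list of warning numbers to suppress;
       1591 = "Missing XML comment for publicly visible type or member" -->
  <NoWarn>1591</NoWarn>
</PropertyGroup>
```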


Friday, September 28, 2007

You go, Citizendium

I have a love-hate relationship with Wikipedia, which has made huge amounts of information freely available, but whose contents cannot be controlled for quality. Thus, I became an author (and then an editor) for Citizendium, a relatively new, expert-led online encyclopedia project. It was founded by Larry Sanger, a co-founder of Wikipedia, and is intended to be a more accurate and credible, publicly owned and authored encyclopedia.

For a wiki to be successful, a critical mass of participants is needed. Your expertise is urgently needed to make Citizendium a success. Please consider joining Citizendium soon in the Computers Workgroup. In Citizendium, people author using their real identities, and expert editors provide gentle oversight. To join Citizendium, simply apply here.

If there are going to be large quantities of information about computers available online, let's make sure it's of high quality.

Saturday, July 21, 2007

Using Ruby's WSDL driver to call a Microsoft C# SOAP web service

I've been learning Ruby and recently tried to call a Microsoft C# SOAP web service in Ruby. First thing I needed to do was upgrade Instant Rails to use the latest soap4r library v1.5.7 (it came with v1.5.5). How to do that is shown here and further explained here.

After that, it took a while to learn how to 1) suppress warnings that turned out to be non-critical, and 2) figure out the syntax needed to send parameters. The soap4r library's documentation page, although posted all over the internet, is essentially empty and useless. To save others some trouble, here's the code:

Web service: RSS Service

Method without parameter: getAllSiteNames()

Method with one parameter: getURL(siteName)
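
Calls like those through soap4r's WSDL driver look roughly as follows. This is a sketch, not my original code: the service URL and the site name are placeholders, and the exact shape of the return value depends on the service.

```ruby
require 'soap/wsdlDriver'

# build an RPC driver straight from the service's WSDL
# (URL is a placeholder, not my real service)
driver = SOAP::WSDLDriverFactory.new(
  'http://localhost/RSSService.asmx?WSDL').create_rpc_driver

# method without a parameter
names = driver.getAllSiteNames

# method with one parameter: soap4r passes .NET-style named
# parameters as a hash keyed by the parameter name
url = driver.getURL(:siteName => 'someSite')
```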


Monday, October 09, 2006

Reasonably priced heat reduction for MacBook laptop



I am using a low-cost, inverted sink cushion mat (manufactured by InterDesign), pictured above, underneath my MacBook. It has good traction, is light-weight, provides plenty of air flow and heat insulation, and even looks nice (can barely be seen). It's a decent alternative to spending ~$30 for some kind of industrial laptop cooling platform.

Wednesday, September 27, 2006

Win XP Pro on MacBook at last!




After a week of failed install efforts, the sixth attempt finally seems to have succeeded. This time, I went all the way back to Mac OS X and let it zap the old partition so I could start from scratch (using Boot Camp 1.1.1 Beta). After installing XP, I ran into a new problem--when trying to install the web server (IIS 6.0), XP said it could not copy files from the DVD. Given the previous problems I'd had getting XP to run on the MacBook, I feared another crash was coming, but after a long search, I am happy to report that it was Microsoft's problem, and they have a fix procedure: Knowledge Base article #555268 describes it, and it worked for me!

So, for the first time, the MacBook is fully loaded with my development tools on Windows XP Pro SP2, and I can easily boot between Mac OS X and Windows XP. Hallelujah. If it will just be stable now, I'll have finally accomplished two goals at once--getting myself a faster (Windows) laptop, and getting myself a Mac OS X testbed for Java applications--plus learning more about Mac OS X, which is pretty neat in many ways.