
Paul E. Jones' Blog

Preventing Windows 10 from Rebooting after Installing Updates

September 18, 2016

Microsoft made one of the dumbest decisions I've ever seen with Windows 10: the operating system simply downloads updates, installs them, and then reboots your machine for you! I've lost work I was doing several times and finally decided to track down a solution.

Here is what seems to be working for me.

1) Run "gpedit.msc".
2) Under "Computer Configuration" -> "Administrative Templates" -> "Windows Components" -> "Windows Update":
  2a) Select "Configure Automatic Updates", set it to "Enabled", select "3 = Download the updates automatically and notify when they are ready to be installed", and uncheck "Install during automatic maintenance". I also checked "Install updates for other Microsoft products", though I'm not sure if this has any effect.
  2b) Select "No auto-restart with logged on users for scheduled automatic updates installations" and set it to "Enabled".
3) Run "gpupdate /force".

This works for Windows 10 Pro. I believe that "Home" versions may not have the ability to manipulate policies, so you just have to live with the crap from Microsoft, I guess.
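That said, the two policies above map to registry values under HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU, so it may be possible to get the same effect on any edition by setting them directly from an elevated command prompt. This is a sketch based on the documented policy registry keys, not something I have tested:

Source Code

C:\> REM 3 = download updates automatically and notify before installing
C:\> reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v AUOptions /t REG_DWORD /d 3 /f
C:\> REM do not force a reboot while users are logged on
C:\> reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v NoAutoRebootWithLoggedOnUsers /t REG_DWORD /d 1 /f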

Permalink: Preventing Windows 10 from Rebooting after Installing Updates

Dynadot Adds Support for DNSSEC

November 24, 2013

I posted a blog entry talking about configuring DNSSEC. When I wrote that blog entry, very few registrars actually supported DNSSEC. One of the registrars that I use (Dynadot) did not. Today, though, they do! I was excited to discover that, though I never saw an announcement about it.

I did a little searching via Google and learned that there are actually several registrars that now support DNSSEC! Perhaps people are finally taking security a little more seriously.

I also found another list of registrars that includes, among other things, a clear indicator as to whether the registrar supports DNSSEC or not. This might be useful when you are looking to register or transfer a domain name. For whatever reason, ICANN's list still does not show that Dynadot supports DNSSEC.
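Incidentally, once a registrar publishes your DS record, it is easy to check that the chain of trust is in place. The DS record lives in the parent zone and the DNSKEY records in your own zone, so both should be returned. A generic check using dig (substitute any signed domain):

Source Code

$ dig +short DS packetizer.com
$ dig +short DNSKEY packetizer.com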

Permalink: Dynadot Adds Support for DNSSEC

Using WebFinger to Simplify Bitcoin Payments

September 28, 2013

For a number of years, users of Bitcoin have expressed a desire to use email addresses as a means of sending Bitcoin payments. The reason is that the typical bitcoin address looks like this: 17XoqvUCrf12H7Vc7c7uDxib8FDMXFx2p6. Trying to accurately convey that string of characters to somebody so they can enter it manually is error-prone. It's far simpler to ask them to send money to an address like paulej@packetizer.com.

Making Bitcoin friendlier for the average person involves the use of WebFinger. WebFinger is a very lightweight protocol, published by the IETF on September 27, 2013, that allows one to map a URI (like an email-type address) to a set of other URIs. As a very simple example, this is a subset of what you get if you query my WebFinger server for paulej@packetizer.com:

Source Code

$ curl https://packetizer.com/.well-known/webfinger?resource=acct:paulej@packetizer.com

{
  "subject" : "acct:paulej@packetizer.com",
  "aliases" :
  [
    "h323:paulej@packetizer.com"
  ],
  "properties" :
  {
    "http://packetizer.com/ns/name" : "Paul E. Jones",
    "http://packetizer.com/ns/name#zh-CN" : "保罗‧琼斯",
    "http://packetizer.com/ns/activated" : "2000-02-17T03:00:00Z"
  },
  "links" :
  [
    {
      "rel" : "http://webfinger.net/rel/avatar",
      "type" : "image/jpeg",
      "href" : "http://www.packetizer.com/people/paulej/images/paulej.jpg"
    },
    {
      "rel" : "http://webfinger.net/rel/profile-page",
      "type" : "text/html",
      "href" : "http://www.packetizer.com/people/paulej/"
    },
    {
      "rel" : "http://packetizer.com/rel/blog",
      "type" : "text/html",
      "href" : "http://www.packetizer.com/people/paulej/blog/",
      "titles" :
      {
        "en-us" : "Paul E. Jones' Blog"
      }
    },
    {
      "rel" : "http://schemas.google.com/g/2010#updates-from",
      "type" : "application/atom+xml",
      "href" : "http://www.packetizer.com/people/paulej/blog/blog.xml"
    },
    {
      "rel" : "http://bitcoin.org/rel/address",
      "href" : "bitcoin:17XoqvUCrf12H7Vc7c7uDxib8FDMXFx2p6"
    }
  ]
}

What you see in the output is a set of links, each identified by a link relation type. The last link in the list is my bitcoin address. Bitcoin wallet software could issue a query to my WebFinger server, receive this address, and use it. It’s that simple.
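As an illustration of just how little work that is, here is one way a client could extract the address from the command line, using jq to select the link with the bitcoin address relation type (my sketch, not part of the WebFinger specification):

Source Code

$ curl -s "https://packetizer.com/.well-known/webfinger?resource=acct:paulej@packetizer.com" \
    | jq -r '.links[] | select(.rel == "http://bitcoin.org/rel/address") | .href'
bitcoin:17XoqvUCrf12H7Vc7c7uDxib8FDMXFx2p6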

What's presently shown in my example is static, but it would not have to be. For example, if I used blockchain.info as my wallet, I might put something like this into WebFinger:

Source Code

{
  "rel" : "http://bitcoin.org/rel/payments",
  "href" : "bitcoin:17XoqvUCrf12H7Vc7c7uDxib8FDMXFx2p6?request=https%3A%2F%2Fblockchain.info%2Fr%3Fid%3Dpaulej"
}

Now, when the user enters my email address, the wallet basically gets back a payment API address. I would assume the subsequent query the wallet makes to blockchain.info would return the actual PaymentRequest message as per BIP70 (versus the static 17XoqvUCrf12H7Vc7c7uDxib8FDMXFx2p6).

To make things even simpler, we just do this:

Source Code

{
  "rel" : "http://bitcoin.org/rel/payments",
  "href" : "https://secure.packetizer.com/bitcoin_address/?account=paulej"
}

Note that if you do a GET on that URI on my server as of this writing, you get a bitcoin address back. I have not actually implemented BIP70.
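If you are curious, you can try that lookup yourself; I am assuming nothing about the response here beyond what is stated above, namely that the body carries a bitcoin address:

Source Code

$ curl "https://secure.packetizer.com/bitcoin_address/?account=paulej"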

None of these procedures has been adopted by the Bitcoin community yet, but they do highlight simple and secure ways of conveying addresses that are less prone to error and use the familiar e-mail address.

Permalink: Using WebFinger to Simplify Bitcoin Payments

WebFinger Makes the Web Friendlier

September 16, 2013

WebFinger is a new IETF protocol that allows one to discover information about people and entities on the Internet. It is a RESTful protocol that returns a JSON object (referred to as a JRD) containing a set of aliases, properties, and links related to a given URI.

WebFinger is not a protocol that scours the Internet looking for information about people. Rather, it is a protocol that enables a requesting entity to retrieve specific information that is publicly and purposely shared via a WebFinger server. To give a concrete example, suppose you are a member of a social networking site, wherein you can post your profile picture, your contact information (e.g., address, phone number, and email address), and your name. The social networking site probably has privacy mechanisms so that you can mark that information to be shared with only certain people, groups of people, or publicly. If the social networking site implements WebFinger, then any information marked as “public” might be available via a WebFinger query.

Now, you might be asking yourself why anyone would care about this. Well, imagine visiting a blog and entering your email address in order to post a comment. If you publish information via WebFinger, it would be possible for that blog to retrieve that information. So, you would not have to upload a new picture of yourself or re-enter your name. The blog could retrieve them automatically, just using your email address. That’s very cool.

Now, while WebFinger can work with any URI, clients and servers typically utilize the “acct” URI (which refers to a user’s account) to query for information about a person. For example, my email address is paulej@packetizer.com and my acct URI is acct:paulej@packetizer.com. A blog I might visit would issue a query to the WebFinger server at packetizer.com asking for information about “acct:paulej@packetizer.com”. The response would be the JSON document I described above.

Just to show a simplified example, this is what part of the response message might contain if the server were queried using the “curl” command.

Source Code

$ curl https://packetizer.com/.well-known/webfinger?resource=acct:paulej@packetizer.com

{
  "subject" : "acct:paulej@packetizer.com",
  "aliases" :
  [
    "h323:paulej@packetizer.com"
  ],
  "properties" :
  {
    "http://packetizer.com/ns/name" : "Paul E. Jones",
    "http://packetizer.com/ns/name#zh-CN" : "保罗‧琼斯",
    "http://packetizer.com/ns/activated" : "2000-02-17T03:00:00Z"
  },
  "links" :
  [
    {
      "rel" : "http://webfinger.net/rel/avatar",
      "type" : "image/jpeg",
      "href" : "http://www.packetizer.com/people/paulej/images/paulej.jpg"
    },
    {
      "rel" : "http://webfinger.net/rel/profile-page",
      "type" : "text/html",
      "href" : "http://www.packetizer.com/people/paulej/"
    },
    {
      "rel" : "http://packetizer.com/rel/blog",
      "type" : "text/html",
      "href" : "http://www.packetizer.com/people/paulej/blog/",
      "titles" :
      {
        "en-us" : "Paul E. Jones' Blog"
      }
    },
    {
      "rel" : "http://schemas.google.com/g/2010#updates-from",
      "type" : "application/atom+xml",
      "href" : "http://www.packetizer.com/people/paulej/blog/blog.xml"
    },
    {
      "rel" : "http://bitcoin.org/rel/address",
      "href" : "bitcoin:17XoqvUCrf12H7Vc7c7uDxib8FDMXFx2p6"
    }
  ]
}

This document has a lot of useful information inside. For example, it provides my name, URLs for my picture, profile page, and blog, the Atom feed for my blog, and my Bitcoin address.

The last example is rather interesting. For those who are not familiar with Bitcoin, it is a relatively new digital currency that is growing in popularity. One of the challenges from a user perspective with Bitcoin is sharing one’s bitcoin address reliably with people. A bitcoin “address” looks like that long string of characters following “bitcoin:” in the example above. Typing that when trying to send somebody money is prone to error. WebFinger makes it much simpler by “discovering” the address using the more familiar e-mail address. So, as Bitcoin software clients are updated to support WebFinger, one would just enter “paulej@packetizer.com” to send money, for example. The software would add the “acct” URI scheme on the front, send the query to the domain, and then look for the bitcoin address(es) returned in the JRD.
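To make that flow concrete, here is a rough command-line sketch of the client-side logic: take the part after the “@” as the domain, prepend the “acct” scheme, query that domain’s WebFinger server, and pull out the bitcoin link (an illustration of the steps described above, not code from any particular wallet):

Source Code

$ email="paulej@packetizer.com"
$ domain="${email#*@}"
$ curl -s "https://${domain}/.well-known/webfinger?resource=acct:${email}" \
    | jq -r '.links[] | select(.rel == "http://bitcoin.org/rel/address") | .href'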

WebFinger is already utilized by standards like OpenID Connect, which allows one to log into remote web sites using one’s account URI. This greatly simplifies the login process and reduces the need to fill out lots of repetitive information when creating new accounts or associating two accounts.

Of course, since WebFinger is still new, it’s quite possible that your service provider does not yet support it. The good news is that it’s very simple to implement and there are already several open source implementations of client and server code.

Permalink: WebFinger Makes the Web Friendlier

Paranoia Leads to Excessive Use of Firewall Rules

June 24, 2013

All of us want to ensure our private information remains private and that data is not leaked onto the Internet. However, some IT departments simply go overboard in trying to secure information.

My wife recently worked for a company that would not allow any external communication by any employee without authorization from their management. Basically, without authorization there were absolutely no Internet access privileges at all. That’s certainly one way to control the leaking of information, though the same IT department had absolutely no means to prevent data from being copied to a flash drive. Thus, the policy must have been in place only to prevent leaking of information by “spyware” software that was unknowingly running behind the scenes. That might have helped, but I doubt it. After all, there were many in the company with Internet access.

Her employer and many, many IT departments also practice something that makes little sense to me: blocking certain outbound ports. Sometimes, an IT department will block outbound UDP ports (all of them or ranges). Other IT departments will block nearly all outbound TCP ports. To what end? Is the intent to try to prevent leaking information to the Internet? If so, that is a pretty pointless exercise if the IT department leaves port 443 (HTTPS) open. One could copy a company’s entire collection of data files right out through port 443. Further, software designed to steal information will exploit any potential hole. Whether there is a single port open or 65,535 ports open, it makes no difference. One is all that is needed.

Is the reason for blocking certain outbound ports to prevent employees from using certain software programs? If so, why? Is there truly a business reason to prevent use of certain applications, or is the practice just to demonstrate a certain level of control over employees “because we can”?

Since none of these reasons makes much sense to me, I’ve come to the conclusion that the practice of blocking outbound ports on a firewall is really something done out of paranoia. There appears to be a widespread fear of the unknown when it comes to the Internet. An expert in networking and hacking can get anything through a firewall if even one port is open, so blocking a bunch of ports is a futile exercise. What blocking ports does do is create more frustration for end users and more work for IT departments as they try to figure out which ports to open for the applications users want to use. What it does not do is provide any real security, which is the claimed objective.

Permalink: Paranoia Leads to Excessive Use of Firewall Rules

Backing Up Files to Amazon S3 Using Pug

May 26, 2013

Like many other people, I have a need to do routine backups of my data. And like many others, I have to concern myself with not just my own files, but everyone’s files on the network. Backing up data can be a mind-numbingly boring chore after a while, only made worse by the fact that I really have better things to do than to deal with backups frequently.

Years ago, I used to back up data to magnetic tape. Yeah, that was a long time ago, but it helps put things into perspective: I’ve had to deal with this problem far too long. Over time, I graduated from magnetic tape to other external storage devices, most recently USB drives.

I tried a variety of techniques for backing up data, from full backups to incremental backups. Incremental backups are so nice to perform, since they require far less time. However, if you have ever had to restore from an incremental backup, you know how painful that can be. You have to restore the main backup and then each individual incremental backup. And it’s only made worse when what you need to restore is just one user’s files or a single file.

There is also the hassle of dealing with physical backup devices. You cannot simply store those on site, because they are subject to damage by fire, water sprinklers, etc. So periodically, I would take drives to the bank for storage in a safe deposit box. That just added to the pain of doing backups.

What I wanted was a backup solution that met these requirements:

  • I wanted a fully automated solution where I didn't have to be bothered with physical storage devices, storing physical drives in a bank vault, etc.
  • Once put in place, I wanted a solution that "just worked" and was fault tolerant
  • I wanted a solution that was secure
  • I wanted a solution that would work incrementally, storing only new or changed files each day (or even each minute)
  • I wanted a means of storing multiple versions of files, any one of which I could retrieve fairly easily
  • I wanted a solution that would not waste space by storing the same file multiple times (which is a problem when multiple users have the same copy of the same file)
  • I wanted a solution that would preserve locally deleted files for whatever period of time I specify (i.e., if a user deletes a file, I want to be able to recover it)
  • I wanted a solution that would allow me to recover any version of a file for some period of time
  • I wanted a solution that I could rely on even in the face of a natural disaster

Fortunately, cloud storage options came along, which had the potential for meeting many of the above requirements. Perhaps the most popular among those cloud storage options is Amazon S3. Equally important, Amazon S3’s pricing is fairly reasonable. If one has 100GB of data to store, for example, the cost of storage is US$9.50 per month today (i.e., US$0.095 per gigabyte per month). Amazon also has a very good track record of reducing the price of cloud storage as they find ways to reduce those costs, making it even more attractive.

The challenge I had was finding the right software.

I found several packages that would synchronize an entire directory with S3, but that approach did not meet many of my requirements. For example, if a user accidentally changed a file and then the sync took place, the original file was lost forever. Likewise, I would not want an accidentally deleted file to be removed from cloud storage.

I also found tools that would create full backups and store those in the cloud. However, that approach is extremely costly in terms of storage and bandwidth. I also found some that worked incrementally, wherein one large backup is made and then only changes are uploaded. The problem with that is that restoring files means downloading the main backup file and then every single incremental backup. That’s painful even for a full restore, but horribly painful when trying to restore just a single file.

So not finding a tool that really met my needs, I decided to write my own. The result is a backup program I call Pug. Pug replicates new and changed files to the cloud as they are discovered and deletes files from the cloud when they are deleted from local storage. Importantly, controls are given to the administrator to allow any number of versions of files to be maintained in cloud storage and to maintain deleted files for whatever period of time one wishes. Thus, I can maintain a complete history of all files stored in local storage and retrieve any single one of them.

In practice, I do not keep all versions of all files. While Pug will allow that, I keep at most 14 revisions of files. And if a file is deleted, I will keep the deleted file around for 90 days. These are just my configured policies, but you get the idea. Pug takes care of everything automatically. The intent is to have a flexible solution that will work in many enterprise environments.

Pug also meets my other requirements in that it never uploads the same file twice, regardless of whether there are 100 copies on the network. Before uploading to Amazon S3, it compresses files using gzip and then encrypts them using AES Crypt. This provides for bandwidth efficiency and security. (Note: do make sure you keep AES Crypt key files well-protected and stored somewhere other than in plain view on Amazon S3!)
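Conceptually, what happens to each file is something like the following command-line sketch. The key file name, bucket name, and use of the aws CLI here are my illustrations only, not Pug’s actual internals:

Source Code

$ gzip -c report.doc > report.doc.gz                           # compress first
$ aescrypt -e -k backup.key -o report.doc.gz.aes report.doc.gz # then encrypt with a key file
$ aws s3 cp report.doc.gz.aes s3://my-backup-bucket/           # then upload to S3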

I have been using Pug for a while and I’m quite pleased with the results. I no longer have to deal with routine backups to physical devices, the system automatically scans all locations (local file system files and files on NAS devices) looking for files to upload or delete, and, of course, the data is securely maintained off-site.

Permalink: Backing Up Files to Amazon S3 Using Pug

US Government Trying to Kill Mt. Gox?

May 16, 2013

Sensationalism at its best, I think. Here's an affidavit filed by the US government in a move to seize all of the US-based assets of Mt. Gox, the largest Bitcoin exchange in the world.

Here are the facts:

  • Mt. Gox is a Japanese company owned by Mark Karpeles (and perhaps others)
  • Mr. Karpeles opened a company in the US called Mutum Sigillum, LLC, which is, as I understand it, a wholly-owned subsidiary of Mt. Gox
  • Mutum Sigillum, LLC opened a bank account at Wells Fargo, stating it was not a money-transmitting business
  • You can use Dwolla (a US company) to put funds into your Mt. Gox account, and the money goes from Dwolla to Mutum Sigillum LLC
  • Mutum Sigillum LLC credits your account at Mt. Gox, transferring money between its account in Japan (held at Sumitomo Mitsui Bank) and its account in the US (held at Wells Fargo)

So, the US government decided that Mutum Sigillum LLC is a "money transmitter". But where was the money transmitted to? It was transmitted to that person's account at Mt. Gox. This is more-or-less the same as wiring money from your bank account in the US to your bank account in Hong Kong, I suppose. Is this a money transmitter? The definition is this:

The term "money transmitting business" means any business other than the United States Postal Service which provides check cashing, currency exchange, or money transmitting or remittance services, or issues or redeems money orders, travelers' checks, and other similar instruments or any other person who engages as a business in the transmission of funds, including any person who engages as a business in an informal money transfer system or any network of people who engage as a business in facilitating the transfer of money domestically or internationally outside of the conventional financial institutions system (Source: 31 USC § 5330(d)(1))

It sounds like the company might be transmitting money, since it is sending money from a customer's account in one place to the customer's account in a different place. The US is trying to argue the company does currency conversion, but Mutum Sigillum LLC does not -- only Mt Gox does that, which is the company based in Japan. So that part of the complaint from the US is nonsense. But, they are still "transmitting money", perhaps. The issue I have with this is that they are not transmitting money to other people. They are merely moving it from one account that YOU own to another account YOU own. So, transmitted where? To yourself.

So perhaps it still qualifies as a money transmitter. But, then again, who actually transferred the customer's money to Mutum Sigillum LLC? Dwolla did. They have a money transmitter license, I guess. Mutum Sigillum LLC just transferred its OWN money to their OWN account in another country. How was this transmitted? They used Wells Fargo, which is either also a money transmitter or exempt as per the end of that definition, since they are a "conventional financial institution". (You have to love how big banks are given a break here.) So, who transmitted the money? The bank did.

So, Dwolla, the bank, and Mutum Sigillum LLC are ALL money transmitters? I can appreciate the first two being classified as such, since they move money from one entity to another entity. Mutum Sigillum LLC does not do that: they transferred funds to themselves: the beneficiary they list on their wire transfers is themselves. Further, Mutum Sigillum LLC used established money transmitters to transfer money: nothing is hidden or secretive in the transaction.

I'm having a tough time seeing how the government is in the right here. It looks to me like they just do not like Bitcoin, feel it is a threat, and are looking for every opportunity to kill it for whatever reason.

What about Dwolla? Are they a money transmitter? In their terms of service, they say, "You also understand that we are not acting as a fiduciary, trustee, money transmitter, or providing any type of escrow service with respect to your funds, but only acting as the receiver’s agent." So, they declare they are not. And they might be able to make the argument since there is a credit union behind them actually performing money transfers. (Like Mutum Sigillum LLC.) So, it's OK for Dwolla to not have a license, but Mutum does need one?

And one does have to ask: if the company is not in compliance with the law, rather than taking all of the money -- which includes customer funds! -- why did they not first notify them of a compliance requirement? The heavy-handed action makes it damned hard for any startup to build a business. After all, there was certainly no criminal intent here. They just want to allow people to buy and sell Bitcoins.

This leads me back to one argument: the government just does not like Bitcoin.

Permalink: US Government Trying to Kill Mt. Gox?

Microsoft Will Remain the Leader in the Computing Industry

April 2, 2013

Reading my most recent post about Microsoft and their insane product activation procedures, one might surmise that I don’t like Microsoft products very much or loathe the company. Actually, I rather like Microsoft products and held the opinion for years that Microsoft did more to advance the personal computing industry than any other firm out there. While all companies operate for the purpose of making a profit and Microsoft is no exception, the company truly produced some of the best software products on the market.

Consider Microsoft Office as one example. Hands down, it is the best productivity suite on the market. Yes, I’ve tried competitive products. There are some good alternatives out there, but they pale in comparison to Office. Some of the competitive commercial products are really good (e.g., Corel’s office suite) and LibreOffice, which is a free open-source product, is also pretty good. However, they all fall short when it comes to matching the features of Microsoft Office. More importantly, none of them does a perfect job at rendering documents created in Microsoft Office. Perhaps it is not fair to judge a product based on how well it reproduces a Microsoft Office document, but that is really a very important consideration for any potential replacement for Office.

I recall many years ago when I made the move from WordPerfect to Word for Windows. Microsoft did a pretty good job at converting documents, but it was not perfect. Many who were heavily invested in WordPerfect simply could not make the move, but WordPerfect really dropped the ball by being so late to move to the Windows platform. They more-or-less opened the door to Microsoft. At the same time, Lotus and Borland were in a spat over the look and feel of a spreadsheet program and, being so engaged in a pointless debate, did not see Microsoft come in strong with a very good product for Windows, taking the spreadsheet business away from both of them. In the end, Microsoft was king of the productivity suites, and they have not stopped innovating. They continually improve the core productivity tools and have introduced great new products like OneNote.

However, Microsoft did drop the ball in a few areas. These days, I use Google’s Chrome browser because Microsoft failed to keep the innovation engine running with Internet Explorer. Internet Explorer 6 lived way too long and Microsoft essentially handed the business to Chrome and Firefox. That matters, because many of the more recent innovations in computing have been in the browser area. The browser is helping Google, for example, wrest some control of the productivity space away from Microsoft by offering browser-based productivity tools that, while not perfect, are free and “good enough”.

Microsoft absolutely and completely dropped the ball in the mobile computing area. They dropped the ball so hard that it created a crater around the Redmond campus several feet deep. The first abysmal failure was not making substantial updates to Windows Mobile. Those who developed applications for Windows Mobile know how horrible the Windows CE development environment was. However, it wasn’t just the operating system itself. End users do not care so much about the operating system. Rather, they care what they can do with it. Microsoft delivered a fairly crippled platform with Windows Mobile. This gave Apple the opportunity to come into the market and show how to do it right. And they did. The iPhone was an awesome piece of technology at the time.

The second major failing in the mobile computing space was Microsoft’s extremely slow move to get into the tablet computing market. That was most unfortunate, too, since Microsoft had been a leader in showing off what could be done in the tablet computing space. They just never seemed to get any products into production. Perhaps the issue was that they were so hung up on maintaining full compatibility with the legacy Windows desktop software.

Things are different today, though. Microsoft has learned a valuable lesson. Maintaining compatibility with all of the legacy applications is not important to consumers. What is important is providing a platform that consumers like. With the right platform, application developers will develop for that platform. As examples, we have the iPad and Android tablets. Apple and Google have stolen the tablet computing market, and it is refreshing to be able to use such thin, light-weight computing platforms to do the less-serious tasks.

Microsoft did wake up, though, and introduced two new mobile computing platforms: Windows 8 and Windows RT. Microsoft has received a lot of flak over Windows RT, but I actually think it was a good move. In fact, I would argue that Microsoft should not even have a traditional desktop mode on the tablet devices. The only reason it would be there is to allow legacy applications to run. However, that is only important in the short-term. If the desktop mode was not offered on tablets, application developers would develop to the APIs available on Windows RT and the applications would likely be more natural to the touch interface on the tablets.

In its rush to get into the tablet market, Microsoft screwed up the desktop. Windows 8 is a horrible desktop operating system. Yes, there are some improvements in efficiency, but I don’t need the start screen. I don’t want to scroll through a bunch of icons. I don’t want the screen flipping from “desktop mode” to the Start Screen to that screen one uses to find applications. It is really messy. People who buy Windows to run on the desktop want a desktop. People who buy tablets want a tablet. The two environments are different and should stay different. Unless Microsoft fixes this mess with Windows 9, I fear Microsoft will risk driving even more of its users to Mac or Linux. Yes, I said Linux. If you’ve not taken a good look at operating systems like Ubuntu or Linux Mint, you should. Those systems provide a lot of functionality and can even run Windows applications, either through compatibility tools like Wine or inside a virtual machine like VirtualBox.

These days, I use a Nexus 7 as my tablet device. It’s really a perfect size for a lot of the more casual things I want to do. There is only one thing I would prefer more than a Nexus 7, and that would be a Windows RT device that was about the same size and had an RT version of Microsoft Office built in. If it came with a comparable price tag, the Windows RT tablet would definitely win.

There is a lot of speculation in the market these days about Microsoft’s market strategy and many paint some very gloomy pictures for the company. I’m actually very upbeat about Microsoft’s future in the computing space. Yes, personal computer sales are down, but they are not dead. Contrary to some calling this the “post PC era”, I would argue the PC is here to stay. I cannot do my job on a tablet, and I know very few who can. Further, I would not want to do my job on a tablet. It’s painful, to say the least. It is simply the wrong tool for doing a job. However, I can appreciate why many consumers are buying tablets and not buying PCs. One reason is that some do not really use their computer to do real work, anyway: it’s a glorified web browser. Another reason is that consumers have a PC and are buying tablets to augment those PCs. The latter class of consumers will likely upgrade when there is a compelling reason. Windows 8, unfortunately, is not a compelling reason. Further, Windows 8 is horrible for the enterprises that depend on desktop applications to get real work done.

I do not know what Microsoft has in store, but contrary to what many are suggesting, I think Microsoft should kill the Windows 8 Pro tablet and focus only on Windows RT. That model has proven to work with both iPad and Android tablets. Application developers can build great Windows applications on Windows RT. At the very least, get rid of the desktop mode on the tablet. At the same time, Microsoft needs to do something with Windows 9 so that it is once again a usable desktop operating system for those who use a PC either at home or at work.

I want a tablet to do things I would do on a tablet. I want a desktop operating system to do things I do on a desktop machine. The two do not have to be precisely the same platform, but what I would want is to have Microsoft Office on both. That is the one piece of software that Microsoft holds in its hands that can make it stand out above all competitors. And I don’t think I’m alone here. I suspect that Microsoft could probably “own the market” on both the desktop and tablet, but their current offerings in both are a bit lacking. However, I’m positive they will get this sorted out.

Permalink: Microsoft Will Remain the Leader in the Computing Industry

Microsoft Product Activation Sucks

March 26, 2013

Today, for no sane reason, Microsoft Office on my PC decided that it was not activated, even though it had been installed and activated on my machine for well over a year. It opened a window and told me I had 28 days to activate it. So, what’s wrong? I tried to re-enter the activation key, and Office told me that my key was not valid.

This became a very long process. I’m going to detail everything below, but I don’t blame you for wanting to skip the details. It’s pretty dry reading. The bottom line is that Microsoft’s product activation crap screwed up and, due to no fault of my own, cost me several hours. I counted at least 4 hours of wasted time. That did not include writing this up, but this just helps me feel better. :-)

I went through various steps in the troubleshooting page that the product activation window directed me to on Microsoft’s web site. Everything seemed to check out fine and I finally get to a page that says “Microsoft Customer Service may be able to help” and I was presented with a phone number. I call the phone number and select the “business customer” option. I get to a lady who asks if I have a support contract. I’m sure my company does, but I don’t know what it is. She said that it would cost over $500 to provide support to me! My gosh! Seriously!?! I told her the nature of the problem and she went ahead and tried to help me out. She never asked for credit card information, though. I trust that was a free call. I’m still beside myself that a single support call would cost more than the product itself!

She asked me for the product key, the serial number on my physical disc, etc. She verified everything and said it all appeared valid and she then said she will transfer the call to the product activation team for further assistance. She said that is who I should have called in the first place. Well, I would agree, but I have no idea why Microsoft’s troubleshooting guide related to product activation would lead me to customer service.

I then get transferred to some fellow named Raul. He asked for some number that I had never heard of before. I asked what number he was referring to and he just countered my question with "You are trying to activate Office 2010, right?" I said yes and he then made the same request for some number. I again told him I had no idea what number he was referring to, so he said he was going to transfer me to customer support. "But, I was just there and they transferred me to you!", I told him. I asked why I should be transferred to customer support when the issue is product activation. He put me on hold for a moment and the next thing I know I am transferred to customer support, but this time they tell me that customer support is closed. Nice.

So, I called back. The lady I spoke to in customer service before gave me the direct number (+1 866 432 3012). I spoke to a fellow there and we went through the motions again. Finally, he said he needed to transfer me to customer support. I told him that customer support was closed already, but he insisted and asked me to stay on the line. I did, but then the line dropped. I’m not sure if he accidentally dropped the line or hung up on purpose. After being tossed around like a volleyball, I do have to wonder.

There’s apparently a problem somewhere and I don’t think he can resolve it. He did confirm that my product key was valid, but for whatever reason, Office 2010 is telling me otherwise.

It was late at night. I went to bed.

The next day, I called Microsoft’s activation number. Again, they asked me two times to read the product activation key, and they told me that it validates. Now, just in case you are not aware, those product activation keys are very long alphanumeric sequences. To say that I’m tired of repeating this number is an understatement. After verifying that it validates, I was transferred to customer support. I was also provided with a direct number to customer support (+1 800 936 5700) and a case was opened for support (case # 1201074501). The wait time was 33 minutes, but fortunately there was an option to allow Microsoft to call me back.

In the meantime, I was curious as to whether communication was really happening between my PC and Microsoft’s activation servers. Oddly, I could not see any. I used Wireshark to try to watch traffic between my PC and Microsoft. I saw IP packets going to several places that I could identify, but none going to Microsoft. So, is Office even communicating with Microsoft’s servers? I disabled the network interfaces on my laptop and got the same results. Office isn’t even talking to Microsoft!

I got a call back from Microsoft. The support person said they need to reinstall Office 2010. Why!?!? How did it break? I didn’t break it. Nothing new has been installed, except for the updates that come down from Microsoft. So, did Microsoft break Office in some Windows update? In any case, I could not wait on an update, because I had a meeting to attend.

The Microsoft representative called me back after the meeting and uninstalled Office and re-installed it. Interestingly, he said he had to use a new activation key. He blamed the activation server for something, but it was not clear to me as to what the issue was. All I know is that he used a new key, but interestingly he used the old installation binary. So, I’m wondering why the re-install was even needed. If he had to change keys for some external reason, then I don’t understand the need to re-install the software.

He also applied a number of Windows registry changes, changed the user account control settings to be insanely permissive, changed permissions in the temp directory, and modified my preferred folder view settings. I asked him to change the folder permissions back, but I still had to go change the other things myself and double-check settings. I then had to re-download a bunch of patches via Windows Update.

Next up, a bunch of applications got screwed up somehow. From what I can tell, some important files in the c:\windows\installer directory were deleted in the process. Damn! How many hours of my day am I obliged to waste because of this crap? I’ve wasted several already and it looks like I’m going to have to re-install a bunch of applications.

Permalink: Microsoft Product Activation Sucks

PDF Creation, Editing, and Viewing

March 18, 2013

It seems that with each passing year, the value of PDF is increasing. It is the common document format that allows one to view a document perfectly on a wide variety of computing platforms, from PCs and Macs to tablets and mobile phones. In recent years, though, Adobe Reader has been plagued with issues. Actually, I’ve been fortunate enough to have never encountered those issues, but Adobe sends out updates from time to time wherein they patch some security vulnerability. So, there are definitely issues.

Adobe’s most recent update to their Windows version of Reader did present me with a problem, though: I could not open two PDF documents at the same time. No matter what I did, it was impossible to open a second document.

So, I went looking for an alternative PDF viewer for my PC. I decided to give Foxit a try. I had heard about Foxit a few years ago, but did not really know how well it compared to Adobe. I was impressed right away. It seemed to load quickly, render quickly, scroll quickly, and I could not find a document that presented rendering issues. (Yeah, that is the one ugly truth about PDF. It’s universally available, but largely because Adobe Reader is universally available. In my experience, other software does not always consistently render PDF documents properly.)

I’ve been using Foxit every day now for a few weeks. It’s a really great PDF viewer. So far, I have not found a single instance where it failed to properly render a PDF document.

Permalink: PDF Creation, Editing, and Viewing