Paul E. Jones' Blog

Delegating ENUM Resolution Responsibility

March 29, 2012

One of the biggest challenges with respect to getting ENUM deployed is politics. Everyone wants to control the numbering plan, because they either see money in owning the numbering, do not want to be liable for service outages caused by reliance on some other entity, fear losing control over numbers they currently manage, or have some other reason. In any case, it has been very difficult to move the world to ENUM. Well, I’m here to ask the question, “Why worry about it?”

Dialed Digits is an ENUM service provider. It’s one of many ENUM service providers around the world. One can query Dialed Digits starting at the root of the ENUM tree at enum.dialeddigits.com. Dialed Digits can also delegate management of a portion of the ENUM tree to another organization. This can be done by simply inserting NS records into DNS like this under enum.dialeddigits.com:

Source Code

$ORIGIN enum.dialeddigits.com.
4.4 IN NS ns1.packetizer.net.

This DNS record basically says that all numbers under the UK country code (+44) can be queried via ns1.packetizer.net. (Please note that this record is entirely fictitious. Packetizer does not manage the phone numbers for the UK.) So, if I were trying to contact the web team for Buckingham Palace in London at +442079304832, then a query would be sent for 2.3.8.4.0.3.9.7.0.2.4.4.enum.dialeddigits.com. Seeing that +44 is handled by ns1.packetizer.net, a query would be directed to that server, still looking for 2.3.8.4.0.3.9.7.0.2.4.4.enum.dialeddigits.com. However, what if BT is the owner and manager of those phone numbers? It probably is. Further, I’m quite certain that BT is going to want to manage its numbers under its own domain name, perhaps at enum.bt.co.uk.
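For illustration, that digit-reversal step is trivial to code. Here is a minimal Python sketch (enum_domain is just a hypothetical helper name, and the suffix is our fictitious example root):

Source Code

def enum_domain(e164, suffix="enum.dialeddigits.com"):
    """Convert an E.164 number to its ENUM domain name."""
    digits = e164.lstrip("+")  # drop the leading '+'
    return ".".join(reversed(digits)) + "." + suffix

# enum_domain("+442079304832") returns
# "2.3.8.4.0.3.9.7.0.2.4.4.enum.dialeddigits.com"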

So, how do we tell the ENUM resolution engines to go look in an entirely different domain? Here’s my proposal. We should use NAPTR records and introduce a new flag “x” that signals that responsibility for queries has been “transferred” to the specified domain (or sub-domain).

For example, under enum.dialeddigits.com we might have a record that looks like this:

Source Code

$ORIGIN enum.dialeddigits.com
*.4.4 IN NAPTR 100 10 "x" "" "" enum.bt.co.uk.

So, when a phone number like +442079304832 is resolved, an answer will come back with a NAPTR record that effectively says “go ask again over at enum.bt.co.uk”. The query is then re-issued under the specified domain.
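A resolver that honors this proposal might behave like the sketch below. To be clear, the “x” flag is only a proposal, so this is hypothetical; the sketch also assumes the third-party dnspython package.

Source Code

import dns.resolver  # assumes the third-party dnspython package

def resolve_enum(e164, suffix="enum.dialeddigits.com", max_hops=5):
    """Look up NAPTR records, following proposed "x" transfer records."""
    digits = ".".join(reversed(e164.lstrip("+")))
    for _ in range(max_hops):
        answers = dns.resolver.resolve(digits + "." + suffix, "NAPTR")
        transfers = [r for r in answers if r.flags.lower() == b"x"]
        if not transfers:
            return list(answers)  # ordinary NAPTR answers
        # "Go ask again" under the domain named in the transfer record
        suffix = str(transfers[0].replacement).rstrip(".")
    raise RuntimeError("too many transfer hops")

# resolve_enum("+442079304832") would query
# 2.3.8.4.0.3.9.7.0.2.4.4.enum.dialeddigits.com, see the transfer to
# enum.bt.co.uk, and re-query 2.3.8.4.0.3.9.7.0.2.4.4.enum.bt.co.uk.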

What this allows service providers to do is, through DNS and ENUM procedures, define who the authority is for any given digit string and delegate management to them. There is no need to rely on a central e164.arpa. There would be a need to establish peering relationships, but this approach would allow one to rely on any number of companies to provide that peering management.

Imagine if Dialed Digits were the “fallback” service for AT&T. Perhaps AT&T might manage ENUM services for all of its own numbers and might insert “transfer” NAPTR records for some numbers owned by service providers with whom the company has a direct peering relationship. But, for all other digits, it might rely on Dialed Digits to establish those peering relationships and provide the appropriate NAPTR records (either answers to queries or further “transfer” records).

UPDATE: I was exchanging email with Patrik Fältström on this topic. He suggested that, rather than introduce a new NAPTR flag, just use DNAME records. That's really a far simpler way of delegating. So, an ENUM provider might have a DNS entry that looks like this:

Source Code

$ORIGIN enum.dialeddigits.com.
4.4 IN DNAME 4.4.enum.bt.co.uk.

This would mean that any query for +44 numbers would be directed to BT's ENUM tree at 4.4.enum.bt.co.uk, if such a subtree existed. For example, a query for 2.3.8.4.0.3.9.7.0.2.4.4.enum.dialeddigits.com would be rewritten transparently by DNS to 2.3.8.4.0.3.9.7.0.2.4.4.enum.bt.co.uk.

Permalink: Delegating ENUM Resolution Responsibility

Using XMPP with VoIP Protocols

March 20, 2012

As many know, I am a big advocate for enabling a plurality of devices and applications to be used together as part of a multimedia communication session. That is the whole idea behind the work the ITU is presently doing with respect to H.325 (or AMS). AMS aims to be the next-generation multimedia communications protocol, succeeding the now-aging SIP and H.323 protocols, both of which are now 16 years old. While work on AMS is still progressing, there are things we can do in the interim to make it easier to integrate some applications, perhaps most importantly text and voice/video.

XMPP is the international standard for instant messaging and presence. It is widely used within enterprises around the world and by services like Google Talk. Due to its design, it has the potential to be as ubiquitous as email is today. And like email, it fully allows for federation between different domains. With XMPP, it is just as easy to have an instant messaging (IM) session with a colleague as it is with anybody around the world.

H.323 and SIP are the two leading voice and video communication standards in the market today. H.323 is still the most widely used protocol for videoconferencing, while SIP is primarily used as a voice “trunking” protocol between enterprises and service providers. In the core of service provider networks, both H.323 and SIP are employed, with SIP perhaps now leading as a pure voice replacement.

It is becoming increasingly possible to use VoIP (voice or video) to place calls between colleagues and with other people around the world. Since VoIP generally means “voice” in my mind, I prefer the more generic term IP Multimedia Communications (IPMC), of which voice, video, instant messaging, whiteboarding, etc. are all a part. So, I’ll use IPMC below, but you can think of that as “VoIP” if you prefer that term.

When I initiate an IPMC session, it usually offers only a single mode of communication. Quite often, it is just a voice or voice/video call (admittedly, that is two modalities) or instant messaging. Rarely do we have the ability to initiate one session (e.g., voice) and then add instant messaging to it, especially if the two applications are not a single unified application. For example, if I make a call using my IP phone, my IM client has no idea that I’m talking to somebody. Likewise, if I am carrying on a few instant messaging sessions, my IP phone is oblivious to this fact.

What we need is a means of better integrating voice/video applications with XMPP. There was some work that started in the IETF to do this, but I do not think that work progressed too far. Nonetheless, I think it is important work and I figured I would write up my thoughts here.

We have two problems we need to solve:

  • My voice/video phone (desk phone or soft client) needs to know when I have an instant messaging session active with somebody so that I can just press a button to launch a voice call, and it needs to know the voice contact information for the other person
  • My instant messaging client needs to know when my voice/video phone is in an active call with somebody, and it needs to know the XMPP JID (the user’s identity) for the person with whom I am having a conversation

From these two requirements, we can see there is a need to share addressing information and there is a need to convey some presence state between the phone and the instant messaging client.

One way to convey addressing information is to simply advertise it within the protocols themselves. For example, when I configure my voice application, I could tell it my XMPP address. Likewise, when I configure my XMPP application, I can tell it the URI for my voice/video application. That’s pretty simple. You can imagine in SIP, for example, that we might introduce a header like this:

Source Code

IM-Address: xmpp:paulej@packetizer.com

In fact, XMPP already defines the means through which addresses can be advertised for other applications.

A small addition like this to SIP and H.323 would allow me to call you, for example, and immediately know your XMPP address or your voice/video URL. One could also advertise one's H.323 or SIP URI via XMPP, too. If I have XMPP and voice/video integrated into a single application, that would be all I need to know in order to quickly launch a different mode of communication right from within my application.

Often, though, these applications are separate. So what we need is a means of allowing the voice/video application and XMPP application to convey their status information to each other. A very reasonable way to do that is to re-use XMPP. After all, XMPP was designed to be a presence protocol. It has the ability to learn and maintain state information related to various presentities (“presence entities”).
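To make that concrete, the phone could publish its state as an ordinary XMPP presence stanza, perhaps something like the sketch below. (This is only an illustration; a real integration would presumably define a proper extension element to carry the remote party's address rather than overloading the status text.)

Source Code

<presence from="paulej@packetizer.com/desk-phone">
  <show>dnd</show>
  <status>On a voice call</status>
</presence>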

With the phone knowing about active IM sessions and the XMPP client knowing about active voice/video sessions, it becomes trivial to initiate new modes of communication with the touch of a button. If I call you using my phone, my IM client would know I am on a call with you. I could press a button on my IM client that corresponds to the active voice call and use instant messaging without ever having to manually enter an address.

There are also ways for clients to learn addressing information for users automatically. For example, rather than tell my phone my JID, we can use technologies like WebFinger. Using WebFinger, it would be possible for my phone to query for the other addressing information related to me. Further, it would be possible for the person I call to learn my other addresses (IM, voice, email, etc.).
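As a sketch of what such a query might look like, assuming the Python requests package and a WebFinger endpoint at the /.well-known/webfinger path (the account is just the example address used earlier):

Source Code

import requests  # assumes the third-party requests package

resp = requests.get(
    "https://packetizer.com/.well-known/webfinger",
    params={"resource": "acct:paulej@packetizer.com"},
)
# Each link advertises one of the user's other addresses (IM, voice, etc.)
for link in resp.json().get("links", []):
    print(link.get("rel"), "->", link.get("href"))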

It is also possible to map telephone numbers to WebFinger account URIs using ENUM. So, it would be possible to convey only the phone number and then discover all of the other addressing information related to a user.

WebFinger makes it very easy to discover information about another person, but I realize that some people might be concerned with privacy. Therefore, WebFinger should be considered as one option and not the only solution. Still, it is one option to make provisioning significantly simpler.

ENUM could also be used to map a phone number to an XMPP address only. However, since we would still need to have the ability to map from an XMPP address to a phone number, we need to either advertise addresses via the session protocols or use WebFinger. I’m open to other recommendations.

Permalink: Using XMPP with VoIP Protocols

Amazon EC2: Creating EBS-backed Instances with Ephemeral Storage and Automatically Deleting the EBS Storage upon Termination

March 12, 2012

I use Amazon EC2 extensively. One of the things I noticed over the past couple of years is a move from instance-store to EBS-backed instances. I’ve read the literature on EBS-backed instances and, quite honestly, I don’t care about the benefits. If an instance dies, I can re-start it and have it up and running in no time, since virtually everything is scripted. That said, I’m not going to fight the trend.

One thing I do miss, though, is that instance-store instances have a large chunk of ephemeral storage available for use for free. With EBS-backed instances, the ephemeral storage is usually not available. It is, though, if you go through the motions of creating your own AMI or find one configured as outlined here.

To take the easiest route, launch an EBS-backed instance of the AMI you’d like to use with ephemeral storage. Make whatever changes you wish to it once you have it running. You might want to add this to the /etc/fstab, adjusting the device name and filesystem as required for your version of Linux:

Source Code

/dev/xvda2 /mnt ext4 defaults 1 2

Now, stop the instance and take a snapshot of its root EBS volume. The snapshot will become our new AMI when we are done, so it will persist as long as you want to keep the AMI around.
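If you prefer to do this from the command line, the same EC2 API tools used below can handle it (INSTANCE_ID and VOLUME_ID are placeholders for your instance and its root EBS volume):

Source Code

ec2-stop-instances INSTANCE_ID
ec2-create-snapshot VOLUME_ID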

Then execute the following command:

Source Code

ec2-register -n AMI_Name -d AMI_Description -a PLATFORM --kernel KERNEL --ramdisk RAMDISK --root-device-name /dev/sda1 -b /dev/sda1=SNAPSHOT_NAME:10:true -b /dev/sda2=ephemeral0

Each of the variables above is defined here:

  • AMI_Name: A friendly name you assign to the AMI
  • AMI_Description: A longer description you assign to your AMI
  • PLATFORM: The platform, either "i386" or "x86_64"
  • KERNEL: The kernel ID of the kernel to use, which can be found using ec2-describe-images or by observing the kernel used while the original instance is running
  • RAMDISK: The ramdisk ID to use, which should match that reported by ec2-describe-images or the one observed while the original instance is running (this is often not specified)
  • SNAPSHOT_NAME: The name of the snapshot you created above. Note that the '10' that follows indicates the size (in GB) of the EBS volume to create for the root filesystem, and 'true' means that the volume should be deleted when the instance is terminated (you may prefer to set this to false)

One of the other things I really do not like about EBS-based instances is that when you terminate them, the EBS storage is left behind and you have to clean that up separately. Using “true” as a part of the -b parameter means that the EBS storage will be deleted automatically when the instance is terminated.

Note that the ec2-register command will return the ID of your new AMI.
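You can then launch instances from the new AMI as usual. For example (AMI_ID, KEYPAIR, and INSTANCE_TYPE are placeholders):

Source Code

ec2-run-instances AMI_ID -k KEYPAIR -t INSTANCE_TYPE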

Permalink: Amazon EC2: Creating EBS-backed Instances with Ephemeral Storage and Automatically Deleting the EBS Storage upon Termination

Replaced WinZip

February 10, 2012

I noted at the end of December that I was pretty fed up with WinZip. That feeling has not changed. How could it after they installed a toolbar against my wishes that was almost viral in nature? How could it after I found so many things installed in the WinZip directory that were not things I purchased or was even told about during the install process (like the "registry cleaner" and such)?

I went on a search for a new tool. I looked at 7-Zip and PowerArchiver. I finally settled on PowerArchiver. Both are good tools, but PowerArchiver has a few more features that I will use.

If you're looking for a good replacement for WinZip, I think you should take a good look at both of these tools.

Permalink: Replaced WinZip

New Domains Make .com Irrelevant

January 24, 2012

As many have undoubtedly heard, ICANN has decided to open the domain name floodgates by allowing the registration of all kinds of new “top level domains” (TLDs). Rather than having just .com, .org, .net, the various country TLDs, and the very small number of newer TLDs like .mobi, people will be able to register domain names that could be anything from .cars to .planes to .crazy.

I do not know how things will turn out, but I do see a few interesting things ahead.

First, there will definitely be those abusing trademarks. I can already imagine that various names like pepsi.cola or bmw.cars or apple.core might be registered by individuals or companies that do not own the rights to those names. A company is certainly not going to want to manage a portfolio of hundreds or thousands of domain names just to ensure that its name is not misused. Unfortunately, this might prove to be a real challenge unless ICANN sets some strict policies that are favorable to trademark holders. (Yes, they have policies now, but I fear they will be insufficient when there are so many battlefronts.)

Why even bother with so many new TLDs? Sure, the .com namespace is very crowded. It’s extremely hard to find a .com domain name. Even so, what will be a good name in a sea of countless TLDs? If you create a web site called “acorn”, is there value in using the domain name acorn.misfire or acorn.toad? If those names are acceptable, then one would think acorn.biz is likewise a reasonable option, yet I see very few companies using .biz. People go out of their way to create silly .com names to avoid use of .biz. How about use of subdomains? I suspect there are domain owners that would be delighted to transform a good .com name into one that can serve as the parent domain for businesses, blogs, etc. Just imagine if owners of names like blogs.com, autos.com, news.com, etc. re-purposed their domains to serve vertical industries. Perhaps a business that fits no particular name might just pick something generic like acorn.gc.com.

An interesting consequence of the expansion of the TLD namespace is that search engines will become even more important. No longer will a person be able to just remember a name like “youtube” and have some level of assurance that the site is “youtube.com”. Going forward, the new GeeWow site might be anywhere. It might be GeeWow.quint. Now, who could have possibly remembered that? If people started using sub-domains, it might be GeeWow.qt.com. With so many TLDs and the possibility of using sub-domains, one will become even more reliant on search engines to help us find information. I suspect we will actually start to see less use of the address bar on browsers and more use of search engines.

Permalink: New Domains Make .com Irrelevant

SOPA Scared Us, but the Megaupload Affair Proves the US Government Exercises Unilateral Power

January 20, 2012

In case you did not hear the news this week, the US Government took down servers owned by Megaupload.com and worked with authorities in countries around the world to have several of the company's employees arrested. According to Wikipedia, the company employed 155 people, yet no consideration was given to the employees and families affected by the actions of the US Government. The Government did not give the defendants an opportunity to answer the charges. Rather, it just shut them down.

As far as I know, I have never used Megaupload. I only learned about it through the news stories that came out this week. One article by CNN suggested that the site was used by people for all kinds of legitimate purposes. I asked my 17-year-old son if he had heard of the site, and he replied that he used it all the time. I asked him what for, and he said that that’s how a lot of people distribute free mods for games. He said that, to his knowledge, he had never downloaded anything from the site that was not free for downloading, certainly no music or videos. Apparently, content is only accessible to people who know the URL to the content. What this means is that if the site was used for sharing illegal content, it was certainly not done on the same scale as that of organizations like the Pirate Bay, where anybody can go search for content and download it. Per the CNN article, one cannot easily search the Megaupload site. To me, the fact that the site cannot be searched serves to counter any claim by the US Government that the site was primarily used for piracy.

What is really frightening is the fact that the government can so quickly and easily remove any server from the Internet. Is the law on their side? Or is the government overstretching its power? Whatever happened to the idea that a person is innocent until proven guilty? If the founders and operators of Megaupload really created the service primarily for illegitimate reasons, this would all come out in a court of law. Further, had the court found the company and its operators guilty, I would have no objection to the court then ordering that the servers be confiscated and service terminated.

Just the opposite happened here, of course. Not only did the Government shut down the service, but they did so even while the DMCA exists to protect service providers. Megaupload is registered with the Government as a service provider and claims to honor all DMCA take-down requests. I personally have no love for the DMCA, as I view it as going overboard in trying to fix a problem, but there is a provision in that law explicitly designed to offer protection to service providers whose users violate copyright laws. The US Government did not care to respect even that law.

What we see is the government going about this the wrong way. They “shoot first and ask questions later.” This seems to be an unfortunate trend with US law enforcement. I could cite so many examples here where I live where police officers have killed innocent people, only to offer up a shallow apology after the fact. What is wrong with America? At least they didn’t murder somebody over a supposedly stolen PlayStation this time. (Yes, the police murdered an innocent young man over a stolen PlayStation that was, in fact, not stolen at all. They shot him right through the front door of his apartment, killing the young man and his dog.)

While I fully appreciate the Government’s desire to stop piracy and I support copyright holders, surely they cannot be so naïve as to believe that shuttering a site like Megaupload will address the problem. Piracy has always been an issue. When I was a kid, I remember kids would copy cassette tapes with songs from friends. They did the same thing with computer programs and such. The software industry worked to educate people about piracy. We also used technology to help reduce the rate of piracy (e.g., activation keys and registration processes). Even so, piracy exists. Honest people will be honest. Those who do not want to pay or cannot afford to pay for what they use will not pay for it, period. I will not call it “theft”, though. Unlike stealing a car or a computer, a digital “pirate” merely copies content for personal use without paying the creator of the content. It might mean the copyright holder is deprived of income, but one has to ask: would this person have paid in the first place? Most likely not. After all, each person only has so much disposable income. So, 100 pirated movies, for example, do not represent a loss of 100 times the price of a movie. Even so, I believe the MPAA and RIAA would like Congress to believe that is the case.

What I think will likely happen is that acts like this by the Government will push programmers to produce a technological solution to the problem. Today’s pirate networks, for example, utilize a peer-to-peer (P2P) technology. It is difficult to stop such networks, but they have weaknesses. It is possible for the RIAA, for example, to monitor such networks and to then file court cases against individuals. Sharing in a P2P network is not anonymous.

An evolutionary step for P2P technology might be to break up files into many small chunks and distribute those chunks throughout the Internet, storing them on the computers of those people who utilize the P2P network, making redundant copies, and encrypting everything so that no useful information is identifiable on any single node. In effect, nobody is sharing anything, but everybody is sharing everything. The challenge to making this successful would be in devising an approach to splitting the files into pieces and constructing a link chain that allows one to then re-assemble any content from some uniquely identifiable starting point, with every scrap of information stored only in the highly distributed P2P network. Perhaps the only means of accessing the content might be via a URI of the form dhtnet:ec116dcc-43b0-11e1-a81a-12313a003d32, where “dhtnet” refers to some Distributed Hash Table Network or other technology. That URI would not direct a browser to go to any particular web site or server, but would result in the P2P software sending queries through the P2P network and assembling the pieces of the corresponding content, following the links and re-assembling the chunks.

Whatever would the copyright holders do to fight that technology? If implemented properly, there would be nothing that could be done except to outlaw its use, which would be a challenge since I can see good legal uses for the same technology. It could be an avenue for the oppressed to exercise free speech, for piracy, or even for storing your private and personal files. Best of all, the whole system would be anonymous and files would be accessible only if you know the URI.

Copyright holders and the Government should not try to fight technology. Rather, educate people and take steps to improve the incomes of citizens. Doing that removes the financial barriers people face with respect to paying for content. Still, there will be some piracy. The real question is what level of piracy is acceptable? As long as a copyright holder says “none”, then they will never be satisfied.

Permalink: SOPA Scared Us, but the Megaupload Affair Proves the US Government Exercises Unilateral Power

Getting a Handle on Passwords

January 16, 2012

Passwords are really getting out of hand. Every web site you visit requires a password. Every password should be different. You should change your passwords from time to time. Every password should be very hard to remember, uhm, I mean use letters, numbers, and special characters so they are hard for hackers to crack or guess. Oh, and never write down passwords in a place where somebody might steal them.

While one solution might be to store your passwords securely in a bank vault, that is not a practical solution to managing passwords.

Risk of Using the Same Password

Of all of the rules that people are advised to follow, the one that says you should not use the same password on multiple sites is definitely one rule you do not want to break. While perhaps no web site intends to divulge your password, the fact is that web sites get hacked often and passwords are stolen. If a web site that has your password on file gets hacked and you use that same password for your bank account, watch out! You really should not use the same password twice.

Complexity of Passwords

Passwords should also be complex. However, passwords do not need to be as complex as some people might lead you to believe. Web sites that argue that you need a special character (e.g., #, $, %) in your password simply have not done the math to see that such a requirement is utterly useless.

What is important is that passwords are sufficiently long and cryptic. One should not use a password like “12345.” One should also not use a password like “wildcat”. Those are simply too easy to guess. If one can look in the dictionary and find your password, you need a better password. If you merely take a word or a name and a few numbers, that’s also not sufficient. While it might take you a while to guess a password like “sally123”, it would take a computer a very short amount of time to discover that password.

What is a good password length? And what characters should be used? The answer requires a little math, so please don’t fall asleep now. Oh, and the answer requires a little understanding of how a computer organizes information. That might put you to sleep, but keep reading and, if nothing else, just see the result.

Let’s assume you take all upper- and lower-case letters and the digits 0-9 and use those in your passwords. That gives you a total of 62 characters. Let’s assume you have a one-character password. That would mean a hacker would have to look at 62 different values to guess your password. Using password cracking software on a computer, cracking your one-character password would take a split second.

What about a two-character password? Using the same characters, the possible permutations would be 62*62 = 3844. Still, that is nothing for a computer. A hacker could still have your password cracked in the blink of an eye.

Clearly, you need something stronger. How do you measure the “strength” of a password? If you understand that, then it starts to become clearer.

Each character used provides a certain amount of “strength” to your password. Specifically, it provides log2(62) bits of strength, or roughly 5.954 bits. Having a two-character password would provide you with about 11.91 bits of strength. So, what is 11.91 in a form a human can understand? It essentially means 2^11.91 possible values, or 3844 possible values. That’s because 2^(log2(62) * 2) is the same as 62 * 62. I’m not trying to make this challenging for the heck of it; you’ll see why it is useful to use logarithms. It’s important that you think of password strength in terms of bits, not the number 62.

What about a 5 character password? That would provide a bit strength of log2(62)*5 = 29.77. That’s improving, but a purpose-built password cracking machine would have that one cracked in 0.32 seconds. (Per Wikipedia, commercial products can crack 2,800,000,000 passwords per second on a standard desktop computer.)

So, we need more bits of strength. Still, how many bits? Perhaps the answer is another question: how many years would you like a hacker to consume trying to crack your password?

We know that the strength in bits of a password comprised of letters and numbers is log2(62)*n, where n is the number of characters in the password. Further, this means that 2^(log2(62)*n) tells us how many passwords the hacker will have to consider while cracking the password. We also know that a commercial product can crack 2.8x10^9 passwords per second and there are 31556926 seconds in a year. So, the number of years it would take to crack a password of length n using this computer would be 2^(log2(62) * n) / (2.8x10^9 * 31556926).

Let’s use this formula once with a password length of 12 characters. That would be 2^(log2(62) * 12) / (2.8x10^9 * 31556926) = 36513 years. I feel fairly comfortable that my bank account would be secure with such a password, don’t you?
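If you want to check the math yourself, here is a small Python sketch of that calculation using the constants above:

Source Code

import math

CRACKS_PER_SECOND = 2.8e9     # the Wikipedia figure cited above
SECONDS_PER_YEAR = 31556926

def years_to_crack(length, alphabet=62, machines=1):
    """Years needed to try every password of the given length."""
    combinations = 2 ** (math.log2(alphabet) * length)  # alphabet**length
    return combinations / (CRACKS_PER_SECOND * SECONDS_PER_YEAR * machines)

# years_to_crack(12) -> roughly 36,513 years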

Working the other way, we can compute how long the password should be to force the hacker to work a specified number of years. The formula is log2(years * password_cracks_per_second * 31556926) / log2(62), or equivalently log62(years * password_cracks_per_second * 31556926) (but who uses log62?), where “years” is the number of years you want to make the hacker work. Let’s say you want it to take 100,000 years. Then, you would need a password that is at least log2(100000 * 2.8x10^9 * 31556926) / log2(62) = 12.24 characters long. Since you cannot use part of a character, you would just use 13 characters for good measure.

Let’s also remember that hackers have access to more than one computer. Perhaps they might employ 1000 computers to crack your password. Using the equation 2^(log2(62) * 12) / (2.8x10^9 * 31556926 * 1000), we can see it would still take a hacker up to 36 years. I still feel fairly confident. I do not know a hacker who would want to devote 36 years of computing resources of 1,000 machines to get any information I have.

Even so, let’s say I’m overly paranoid and I want to ensure it would take 100,000 years with 1000 machines. How long should my password be? We use the formula log2(100000 * 2.8x10^9 * 31556926 * 1000) / log2(62) to learn that the password needs to be 13.92 characters long. So, a 14 character password really packs a powerful punch!
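Continuing the sketch above, the inverse calculation looks like this:

Source Code

def required_length(years, alphabet=62, machines=1):
    """Password length needed to keep a cracker busy for 'years' years."""
    total = years * CRACKS_PER_SECOND * SECONDS_PER_YEAR * machines
    return math.log2(total) / math.log2(alphabet)

# required_length(100000)                -> about 12.24 characters
# required_length(100000, machines=1000) -> about 13.92 characters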

Now, recall how some web sites insist that you use “special” characters on the keyboard to make passwords “stronger”. On most keyboards, there are only a handful of additional characters available. Let’s assume you use a total of 93 characters, using letters, numbers, and various special characters. How strong would that password be? Well, the bit strength would be log2(93), or 6.54 bits per character. That’s only a little better than with 62 characters. So, a 12-character password drawn from 62 different values has a strength of log2(62)*12 = 71.45 bits versus log2(93)*12 = 78.47 bits. You can see that there is little difference. If the extra strength is important, just make your password one character longer. There’s no reason to require use of special characters on the keyboard, as it adds an insignificant amount of bit strength. Let’s consider that example again where we want to protect our password from attack for 100,000 years using a single computer. Recall that with 62 characters, we need 12.24 characters. With 93 characters, we need 11.15. So, it’s 12 characters versus 13. Big deal. It’s not worth the complexity forced on a human to type in the special characters.

So the bottom line is that anything more than 70 bits of strength (12 or more characters) is a strong password today, though I personally prefer using over 90 bits (16 or more characters).

Generating Strong Passwords

Now we know you need a password that is 12 characters or longer to be good. But it cannot be built from simple words; it needs to be as random as possible, so it cannot be easily guessed. It would be advisable to use a program that generates random passwords for you, as in the example below.
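For example, a few lines of Python using the operating system's cryptographically strong random source will do the job (a sketch; pick whatever length suits you):

Source Code

import secrets
import string

ALPHABET = string.ascii_letters + string.digits  # the 62 characters above

def generate_password(length=16):
    """Generate a random password from the 62-character alphabet."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))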

Password Management

So, how in the world do you keep track of your passwords? You could write them all down on paper and keep it secure. That’s a valid option, but not portable. Will you take that piece of paper with you on trips? You could write them down in a computer file and take that with you. But what if the file or paper is stolen or lost?

One solution is to keep all of your passwords stored inside of some kind of password management program. Typically, these programs store all of your passwords, securing them with a single secret password that you do not write down. This is a reasonable solution to the problem, especially if the data is accessible while at home, work, or traveling.

Another solution is to use Single Pass. What Single Pass does is generate a unique, secure password that is 16 characters long using a single “master” password that only you know. It’s similar in concept to password management software, but it does not actually store passwords. Rather, it generates them on demand. To ensure that each password is unique, you provide a “service name” when generating the password. Given your Single Pass password and Service Name, the same password is always generated.

The Service Name could just be the name of the web site or business (which is reasonable for lower-security applications), or it could be a cryptographically strong string of random characters (useful for your bank account). In fact, I generate random strings for Service Names. I maintain a file that lists the “Service Names” I give to web sites, using this password generator. I then have a master password that I do not write down and that only I know. In order to crack my passwords, you would either have to crack them using brute force (and they are 16 characters long, meaning it would take 1,000 high-end machines 539 million years) or obtain both my master password and my list of Service Names.
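To be clear, I am not describing Single Pass's actual algorithm here, but the general idea of deterministically deriving a per-service password from a master password can be sketched like this (an illustration only):

Source Code

import hashlib
import hmac
import string

ALPHABET = string.ascii_letters + string.digits

def derive_password(master, service_name, length=16):
    """Illustrative only -- NOT the actual Single Pass algorithm.
    The same master password and Service Name always yield the same
    password, so nothing ever needs to be stored."""
    digest = hmac.new(master.encode(), service_name.encode(),
                      hashlib.sha256).digest()
    return "".join(ALPHABET[b % len(ALPHABET)] for b in digest[:length])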

The good thing about this dual-password approach is that my Single Pass password is useless without the accompanying file that holds the Service Names. Likewise, the Service Names are useless without the Single Pass password. Should somebody steal either piece of information, I would still have plenty of time to go change my passwords. But I can remember my one password and I don't have to worry so much if somebody steals my list of Service Names.

Doing Away with So Many Passwords

In an ideal world, we would have a better login solution on the Internet than having to use passwords on all web sites. There is one solution on the Internet that has promise: OpenID. OpenID allows you to log in to a web site using a single password stored at an OpenID Provider. There are many providers, such as Yahoo!, Google, and VeriSign. If you can place your trust with one of these companies, you can avoid the need to have a different password for every web site on the Internet.

Questions still abound as to whether OpenID will succeed. There has been pretty good adoption and I’m personally a supporter of OpenID. Unfortunately, support is not as widespread as I would like, and most web sites still do not support it. So, alternative solutions like Single Pass that work with the world's crazy password craze are a necessity.

Permalink: Getting a Handle on Passwords

Is WinZip Becoming Adware?

December 27, 2011

I've used WinZip for a very long time. I can’t say for certain when I first started using it, but I’ve been using the product since at least 1998. Unfortunately, the latest version (v16) proved to be a huge step in the direction of sleazy adware.

One of the first things I noticed when I installed WinZip 16 was that the icons had changed. There is this new “ZIP Send” icon that is ugly. For that matter, most of the interface elements are not as pretty as they used to be in previous versions of WinZip. The attractiveness of the user interface was actually one of the things I liked about WinZip. I did not let that ugly Z bother me, though. I was more interested in the functionality.

What really got under my skin was a new toolbar that got installed without my permission. This was a toolbar called “WinZipBar”, apparently created by an outfit called “Conduit Ltd”. I didn’t want that installed! I am very careful not to install such things. I tried removing it from Chrome and it acted as if it was gone. Unfortunately, it wasn’t. Buried in the Windows Registry, there was stuff that allowed the toolbar to reappear. I discovered this when I tried out the “new user” feature in Chrome. That toolbar came back! This software was supposed to have been removed from my system, yet there it was! It took time, but I believe I found all of the necessary registry entries and files on the hard drive. It had better be gone.

I went on with WinZip 16, but discovered today that there is a “WinZip Quick Pick” running in the task tray. Again, that’s not something I want, so I tried to remove it. As you can imagine, you cannot remove it. At least, I could not remove it. I could get it to shut down, but it would come back after rebooting the machine.

As if this nonsense wasn’t enough, I discovered that WinZip also installed other programs on my machine that I didn’t want. One was a “Registry Optimizer” from WinZip. I didn’t pay much attention to what the other was, but it also left me with the impression that WinZip is turning into those sleazy companies peddling unwanted adware and spyware.

I’m not accusing WinZip of doing anything unethical, but I want WinZip as a good file compression utility. I do not want a browser toolbar. I do not want a task tray program. I do not want a registry optimizer. I certainly do not want any of that stuff pushed onto my machine without my permission.

I uninstalled the software and the web site sought my feedback. I tried to provide it, but their server reported an error. That figures.

Permalink: Is WinZip Becoming Adware?

Google+ Opens to the Public

September 20, 2011

Today, Google opened Google+ to the public. I've been using Google+ for a while. It presents a very different user experience than Facebook. It seems lighter and more relaxing. Then again, it might be because it has largely been a ghost town since opening for private beta. Still, it has millions of users and a fair number actually do post with some regularity. Overall, it looks better than Facebook, in my opinion. It is missing a few useful features, though, such as the ability to create groups of people who are not otherwise in one's circle (e.g., a team of people collaborating on a project or similar). It also lacks the concept of a "Facebook Page", which suits me fine. The whole darn Internet is supposed to be for posting content. Facebook Pages remind me of AOL's attempt to own all on-line content.

In any case, I have a Google account and a Google Profile URL that's so simple and easy to remember. It's https://plus.google.com/103173924987331945891 :-)

Permalink: Google+ Opens to the Public

PDF over SMTP to Replace Traditional Fax

August 22, 2011

I really enjoyed being a part of the revolution that helped move traditional voice services from the PSTN to IP. Merely moving voice from a switched circuit network to a packet-switched network was not the reason for my interest in the field of multimedia communications, though. I was interested (and remain interested) because IP networks open up the door to a world of rich communication capabilities. With IP, there are so many more modes of communication that are possible. Concepts like the Advanced Multimedia System are really cool, where one can utilize a device (like a mobile phone) and communicate with various other devices on the network to realize a powerful and rich communication experience. One can utilize an electronic whiteboard on one device, while having a video stream on another device, and transfer a file in the background on a third device.

All the while, though, there is one ancient piece of technology that simply will not go away. As much as I wish it would, people still insist on using it. That technology is the PSTN facsimile machine.

I was also one of the people that helped to define the standard for transmission of Fax over IP (FoIP). To be fair, I was not the person who designed the first version of the protocol (known as Recommendation ITU-T T.38). Even so, I played a significant role in helping to ensure its place in the IP world. I did not do that work because I liked T.38, though. On the contrary, I have always been of the mind that T.38 did little more than perpetuate the PSTN and a better solution should have been delivered to the market. T.38 exhibits all kinds of problems, especially when there are multiple PSTN gateway hops in the call path. The protocol is very sensitive to end-to-end round-trip delay and, as a half-duplex technology, there are often collisions on the PSTN circuits that cause calls to fail. If that were not enough, some service providers do not provide proper treatment of the modulated signals, sometimes even running them through voice codecs! If you have experienced problems sending faxes, it might very well be due to the fact that the faxes are going over an IP network.

Still, I cannot fault the original designers. At the time the specification was first written, device capabilities were limited and the designers had to make certain choices. Moreover, many of the current-day problems with T.38 will disappear as fewer and fewer PSTN gateway hops are inserted in the end-to-end media path. I look forward to that day, but at the same time I have to ask, “Why do we live with fax at all?”

The answer to that question is simple. It is the reason I was asked to work on T.38, related session signaling protocol support, security enhancements, and so forth. Fax is an important part of day-to-day operations for many businesses and government agencies. Many companies around the world rely heavily on their fax machines to get business done.

Even so, there is a better solution on the market and it has been there for years. It’s called “PDF over SMTP”. It is a very simple technology for end users to use, too. All one has to do is send an email and attach a PDF document. There are even multi-purpose devices sold in office supply stores now that will scan documents and email them as PDF documents to people anywhere in the world. It is just as easy to use as the legacy fax machines, provides the same or better security, reduces wasted paper, reduces cost, produces a higher-quality black and white or color document, and completely side-steps all of the transmission problems that exist with legacy fax machines. Further, it is a completely standard solution to the document transmission problem!
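In fact, the sending side is nothing more exotic than a few lines of code. Here is a sketch in Python (the addresses, server, and filename are placeholders):

Source Code

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Scanned document"
msg.set_content("Please see the attached document.")

# Attach the scanned document as a PDF
with open("document.pdf", "rb") as f:
    msg.add_attachment(f.read(), maintype="application",
                       subtype="pdf", filename="document.pdf")

with smtplib.SMTP("mail.example.com") as server:
    server.send_message(msg)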

Seriously, we should all switch over to using email to send documents and stop using the old fax machines. I am absolutely amazed that the world has not already moved away from that old technology, but my guess is that many people are simply unaware that there are many models of printers and scanners already on the market that have PDF over SMTP capability. Here is one such high-end model and a low-end model. Perhaps the problem is simply that nobody calls it PDF over SMTP. In fact, none of the vendors have a name for this capability. So, I plead to device manufacturers: call it PDF over SMTP so customers know they can get away from the old fax machines they are currently forced to use.

Permalink: PDF over SMTP to Replace Traditional Fax