
Paul E. Jones' Blog

Google+ Opens to the Public

September 20, 2011

Today, Google opened Google+ to the public. I've been using Google+ for a while. It presents a very different user experience than Facebook. It seems lighter and more relaxing. Then again, it might be because it has largely been a ghost town since opening for private beta. Still, it has millions of users and a fair number actually do post with some regularity. Overall, it looks better than Facebook, in my opinion. It is missing a few useful features, though, such as the ability to create groups of people who are not otherwise in one's circle (e.g., a team of people collaborating on a project or similar). It also lacks the concept of a "Facebook Page", which suits me fine. The whole darn Internet is supposed to be for posting content. Facebook Pages remind me of AOL's attempt to own all on-line content.

In any case, I have a Google account and a Google Profile URL that's so simple and easy to remember. It's https://plus.google.com/103173924987331945891 :-)

Permalink: Google+ Opens to the Public

PDF over SMTP to Replace Traditional Fax

August 22, 2011

I really enjoyed being a part of the revolution that helped move traditional voice services from the PSTN to IP. Merely moving voice from a switched circuit network to a packet-switched network was not the reason for my interest in the field of multimedia communications, though. I was interested (and remain interested) because IP networks open up the door to a world of rich communication capabilities. With IP, there are so many more modes of communication that are possible. Concepts like the Advanced Multimedia System are really cool, where one can utilize a device (like a mobile phone) and communicate with various other devices on the network to realize a powerful and rich communication experience. One can utilize an electronic whiteboard on one device, while having a video stream on another device, and transfer a file in the background on a third device.

All the while, though, there is one ancient piece of technology that simply will not go away. As much as I wish it would, people still insist on using it. That technology is the PSTN facsimile machine.

I was also one of the people who helped to define the standard for transmission of Fax over IP (FoIP). To be fair, I was not the person who designed the first version of the protocol (known as Recommendation ITU-T T.38). Even so, I played a significant role in helping to ensure its place in the IP world. I did not do that work because I liked T.38, though. On the contrary, I have always been of the mind that T.38 did little more than perpetuate the PSTN and that a better solution should have been delivered to the market. T.38 exhibits all kinds of problems, especially when there are multiple PSTN gateway hops in the call path. The protocol is very sensitive to end-to-end round-trip delay and, as a half-duplex technology, there are often collisions on the PSTN circuits that cause calls to fail. If that were not enough, some service providers do not provide proper treatment of the modulated signals, sometimes even running them through voice codecs! If you have experienced problems sending faxes, it might very well be because the faxes are going over an IP network.

Still, I cannot fault the original designers. At the time the specification was first written, device capabilities were limited and the designers had to make certain choices. Moreover, many of the current-day problems with T.38 will disappear as fewer and fewer PSTN gateway hops are inserted in the end-to-end media path. I look forward to that day, but at the same time I have to ask, “Why do we live with fax at all?”

The answer to that question is simple. It is the reason I was asked to work on T.38, related session signaling protocol support, security enhancements, and so forth. Fax is an important part of day-to-day operations for many businesses and government agencies. Many companies around the world rely heavily on their fax machines to get business done.

Even so, there is a better solution on the market and it has been there for years. It’s called “PDF over SMTP”. It is a very simple technology for end users to use, too. All one has to do is send an email and attach a PDF document. There are even multi-purpose devices sold in office supply stores now that will scan documents and email them as PDF documents to people anywhere in the world. It is just as easy to use as the legacy fax machines, provides the same or better security, reduces wasted paper, reduces cost, produces a higher-quality black and white or color document, and completely side-steps all of the transmission problems that exist with legacy fax machines. Further, it is a completely standard solution to the document transmission problem!
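Sending a scanned document as "PDF over SMTP" takes only a few lines with standard email tooling. Here is a minimal sketch in Python; the server name, addresses, and file path are hypothetical placeholders, and a real deployment would add authentication and TLS.

```python
# Minimal "PDF over SMTP" sketch: email a scanned document as a PDF
# attachment. Server name, addresses, and path are placeholders.
import smtplib
from email.message import EmailMessage

def build_pdf_message(pdf_bytes, sender, recipient, filename="document.pdf"):
    """Build a plain email message carrying a PDF attachment."""
    msg = EmailMessage()
    msg["Subject"] = "Scanned document"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content("Please find the attached document.")
    # application/pdf lets any standard mail client open the attachment.
    msg.add_attachment(pdf_bytes, maintype="application",
                       subtype="pdf", filename=filename)
    return msg

def send_pdf(pdf_path, sender, recipient, smtp_host):
    """Read a PDF from disk and send it through the given SMTP relay."""
    with open(pdf_path, "rb") as f:
        msg = build_pdf_message(f.read(), sender, recipient)
    with smtplib.SMTP(smtp_host) as server:  # hypothetical relay host
        server.send_message(msg)
```

This is exactly what the scan-to-email devices mentioned above do internally: scan, wrap the PDF in a MIME message, and hand it to an SMTP server.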

Seriously, we should all switch over to using email to send documents and stop using the old fax machines. I am absolutely amazed that the world has not already moved away from that old technology, but my guess is that many people are simply unaware that there are many models of printers and scanners already on the market that have PDF over SMTP capability. Here is one such high-end model and a low-end model. Perhaps the problem is simply that nobody calls it PDF over SMTP. In fact, none of the vendors have a name for this capability. So, my plea to device manufacturers: call it PDF over SMTP so customers know they can get away from the old fax machines they are currently forced to use.

Permalink: PDF over SMTP to Replace Traditional Fax

End-to-End Session Identification of Multimedia Sessions

November 28, 2010

One of the challenges we’ve often faced with IP multimedia communications systems is that of session identification. H.323 has the concept of a “Call ID” and SIP has the concept of a “Call ID”, but they are not compatible with each other. As such, it’s impossible to allow a session to be identified end-to-end. This opens up the possibility of getting into call loops, etc.

Aside from avoiding network issues like call loops, though, there is also a desire to be able to identify a session end-to-end (even through interworking functions), track a session as it is transferred, identify sessions that are part of the same multipoint conference, and associate media flows with a signaling session.

In H.323, we introduced a field (called CallLinkage) that made an attempt at tracking calls as they were transferred. However, the rules for how to populate those fields were complex and, in the end, few implemented the procedures. Still, what would work for H.323 did not work for SIP.

Within H.323, we have the ability to associate all devices that are part of the same conference, since all participants use the same conference identifier (CID). At least, that is the theory. In practice, though, most multipoint conferences are really just a logical association of point-to-point sessions. So, the conference identifier in H.323 has not been used consistently. And, as before, what would work for H.323 does not work for SIP.

Management systems, policy enforcement points, and other network elements might also want to associate session signaling with media flows. To do that, somehow those two need to be correlated with each other. There are ways we could allow that to work for SIP or H.323, but there are real-world implementation issues in trying to do that. Further, selecting a mechanism that works for SIP or H.323 would not allow this to work end-to-end.

In recent weeks, there has also been a lot of activity on the SIPREC mailing list in the IETF where folks recognize a need to have some kind of session identifier, and what currently exists for SIP is not workable.

For these (and other) reasons, some colleagues and I put together an initial draft of a new Session Identifier that can be used by H.323, SIP, or even the forthcoming H.325 multimedia communication systems. Feel free to give us feedback!
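The draft itself defines the exact syntax, so the following is only an illustration of the core idea: a protocol-neutral identifier (here, a UUID) minted at session origination and carried unchanged through gateways and transfers, so that the SIP and H.323 legs of an interworked call share one value. The header and field names below are hypothetical.

```python
# Illustration only: a protocol-neutral session identifier that survives
# SIP/H.323 interworking. Header/field names are hypothetical; the
# actual draft defines its own syntax.
import uuid

def new_session_id():
    """Mint a globally unique identifier at session origination."""
    return str(uuid.uuid4())

def interwork(sip_headers):
    """Hypothetical SIP-to-H.323 gateway step: copy the identifier
    across rather than minting a new one, preserving end-to-end
    identification through the interworking function."""
    return {"sessionId": sip_headers["Session-ID"]}
```

Because every element copies the value rather than generating its own, loop detection, transfer tracking, and signaling-to-media correlation can all key off the same identifier.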

Permalink: End-to-End Session Identification of Multimedia Sessions

More gTLDs are on the Way

October 12, 2010

I just read an article on CNET about expansion of the gTLD namespace. Is this really a good idea?

We already have domains like .museum, .aero, .biz, .jobs, .mobi, .travel, .pro, etc. How many of these do we see being used in practice? I have seen a few such domains, but very few. So, why do we think opening up more gTLDs will encourage people to use them? I doubt it will.

The biggest frustration with .com is the fact that so many names are taken, quite often by somebody sitting on the name to earn ad revenue. If that behavior were stopped, we would not have an issue with .com. After all, if one wanted to create a company called “Foo” and discovered that “Foo” was already used by another legitimate business, that person would probably want a different name just to avoid confusion. Having foo.com and foo.biz is not helpful to the business with that name. Having foo.* only exacerbates the problem.

Do we really want or need foo.plumbing and foo.computers? Imagine if Apple's domain was apple.computers. What would the company do when it decided that it was no longer a “computer” company and wanted to present itself more broadly? Oh, apple.com. So, why not just start with .com in the first place?

I think ICANN should put more effort into removing domains that exist solely for the generation of ad revenue. I believe that would address the concerns people have with limited name space.

Permalink: More gTLDs are on the Way

Federated Identity with OpenID

April 18, 2010

For most sizable corporations, there is a desire to be able to federate identity for users within the corporation to third parties. For example, your company might provide travel services to employees through a third-party portal, allow employees to view paychecks on a third-party web site, etc. To allow this to be done safely and securely, the two businesses must work together to come up with a way to authenticate the corporate user. All too often, though, the third party has absolutely no hand in the authentication step and merely trusts that a URL from the company that somehow identifies the user is legitimate.

Some of the inter-domain federation mechanisms are really, really insecure. I’ve seen some that are so bad that all one needs to do is grab the URL and use it to access an employee’s confidential information.

OpenID can be used to address this issue securely and without using proprietary mechanisms. Rather than authenticating the user and redirecting the user to a third-party with some kind of trust “credentials” inside the URL, the company can just redirect the user to the third-party and provide the user’s OpenID identifier. For example, when redirecting the user to the corporate travel site, perhaps this might be the URL used:


http://travelsite.example.com/?openid=http%3A%2F%2Fopenid.packetizer.com%2Fpaulej

The receiving travel site will receive the OpenID identifier http://openid.packetizer.com/paulej and can then go through the normal OpenID procedures to authenticate the user with the corporation’s OpenID server. This is far better than passing “credentials” around via URLs. Equally important, the method is very simple and secure. Further, it removes the need to create and manage a host of proprietary mechanisms between various “trusted” third-parties.
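The redirect described above is straightforward to construct and parse with standard URL handling. This sketch builds the percent-encoded redirect and shows how the travel site would recover the identifier before starting normal OpenID discovery; the site URL and `openid` parameter name follow the example above.

```python
# Build and parse the OpenID redirect from the example above. The
# identifier is percent-encoded into the query string; the receiving
# site decodes it and begins standard OpenID discovery.
from urllib.parse import urlencode, urlparse, parse_qs

def build_redirect(site, openid_identifier):
    """Redirect URL carrying the user's OpenID identifier."""
    return site + "?" + urlencode({"openid": openid_identifier})

def extract_identifier(url):
    """Recover the OpenID identifier on the receiving side."""
    query = parse_qs(urlparse(url).query)
    return query["openid"][0]

url = build_redirect("http://travelsite.example.com/",
                     "http://openid.packetizer.com/paulej")
# url == "http://travelsite.example.com/?openid=http%3A%2F%2Fopenid.packetizer.com%2Fpaulej"
```

The security comes from the OpenID authentication exchange that follows, not from the URL itself, which is why nothing secret needs to travel in the redirect.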

Permalink: Federated Identity with OpenID

OpenID: The Internet Login Problem Solved

March 31, 2010

For years, I’ve been frustrated with the fact that I have so many login identities and passwords for literally hundreds of web sites. Of course, I’m certainly not alone in that respect, as just about everybody has way more logins than they can deal with. The problem has been so bad that I laugh when I read a notice on a web site that says to make your password secure, using mixed-case and special characters. Say, something like this: rS3YC%e@6. Oh, and don’t write it down, is the advice given. Memorize it. And don’t use the same password on two sites. Yeah, right. Nobody can do that. Perhaps a person can manage to do that with one or a very few passwords, but not hundreds of passwords.

That is why I am so delighted to see OpenID taking off. For those who do not know, OpenID solves this problem by giving users a single user identity and password that they can use to access all of their OpenID-enabled web sites. Right now, sites like Blogger, Facebook, Flickr, MySpace, WordPress, Google, Yahoo!, and many more sites have support for OpenID. We even have OpenID support for Packetizer Forums, so you don’t have to keep up with a separate ID to discuss stuff about VoIP, videoconferencing, or cloud computing.

One of the really cool features of OpenID that is not fully appreciated is the fact that it is possible for you to log in once and never have to log in again. You can roam around from site to site and be automatically logged in. Perhaps you might have to provide your user ID, but you don’t have to provide your password if your identity provider supports the right features in OpenID. With the OpenID software we use on Packetizer, you can indicate when you log in the first time whether you want to be prompted for a password again or not.

Anyway, I’m excited since more and more web sites are adding support for OpenID. This makes using the Internet so much more enjoyable.

Permalink: OpenID: The Internet Login Problem Solved

Obama Signs Travel Promotion Act Into Law

March 8, 2010

Recently, President Obama signed into law a bill that is meant to help encourage tourism to the US. Under the Travel Promotion Act, every visitor to the United States will be required to pay an entry fee.

That certainly does not sound like it would encourage tourism to the US. “Welcome to the US. Now, pay your fee.”

There is, of course, more to the bill than that. The bill is intended to create a marketing organization and a set of programs to help encourage visitors to come to the United States. This is all fine, but why do people not want to come to the United States, anyway?

Well, there are reasons. First, many visitors to the US must get a visa. There are certain countries where this is not required, but even in those cases, visitors are required to go online and complete some entry form, otherwise they will be denied entry.

All visitors are required to fill out more paperwork than visitors to any other country in the world (the white I-94 or green I-94W form), staple crap into their passports, and ensure that the departure document is properly returned before departure. Failure to do that might jeopardize the visitor’s ability to come to the US again, and will certainly increase the likelihood of having to get a visa the next time the visitor wishes to come to the US.

Upon arrival, US immigration officers will take the visitor’s mug shot and fingerprints, almost as if the person arriving is assumed to be a felon. Sure, all of this was done to “protect national security” after the attacks of 9/11, but does the leadership of the United States not understand how this makes a visitor feel?

Oh, and have you ever watched how our border agents (especially those running the scanners) treat visitors when they do not follow instructions? They are absolutely rude. It does not matter that the visitor cannot speak English. They don’t care. They’ll just yell at the visitor as if they’re a common criminal. I’ve seen it happen far too many times.

So, because we put up so many barriers to people who want to visit the United States, people do not come. And then we wonder why. So, we create a marketing organization and fund it to encourage visitors.

Do the folks in Washington not understand the root problem? I can hop on a plane and fly to Switzerland tomorrow. I’ll be greeted with a smile, not required to fill out any documents at all, and given no harassment whatsoever by the border agents. There is a difference and people feel it.

Lots of folks I work with in the ITU-T, which is a part of the United Nations, have explicitly asked me to not hold meetings inside the United States for all of these reasons -- and so I don’t. Perhaps one day our government will stop treating visitors so poorly.

Permalink: Obama Signs Travel Promotion Act Into Law

Versioning REST Interfaces

March 8, 2010

Web-based interfaces (also called APIs) are growing in popularity. Some are very trivial, such as an interface that returns a list of stock quotes, while others are more complex. There are a number of technologies in use today, but what I’ve been focused on primarily is REST.

REST is a very simple idea, but there are so many opportunities for making it complex. One example is REST Versioning. We posted some recommendations on how to deal with REST versioning and we also have some discussion of XML versioning to support that.
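One common approach to REST versioning, shown here purely as an illustration and not necessarily the one in the recommendations mentioned above, is to carry the version as a path segment and dispatch on it server-side:

```python
# Illustrative versioned-REST dispatcher: the version travels as the
# first path segment (/v1/quotes, /v2/quotes, ...). Resource names and
# handler behavior are hypothetical.
from urllib.parse import urlparse

HANDLERS = {
    "v1": lambda resource: {"version": 1, "resource": resource},
    "v2": lambda resource: {"version": 2, "resource": resource},
}

def dispatch(url):
    """Route a request URL to the handler for its API version."""
    parts = urlparse(url).path.strip("/").split("/", 1)
    version = parts[0]
    resource = parts[1] if len(parts) > 1 else ""
    handler = HANDLERS.get(version)
    if handler is None:
        raise ValueError("unsupported API version: " + version)
    return handler(resource)
```

Keeping old version handlers registered side-by-side with new ones is what lets existing clients keep working while the interface evolves.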

Permalink: Versioning REST Interfaces

Secure HTTP Cookies

March 2, 2010

Some people love them and some people hate them, but HTTP cookies are really darn useful. Cookies are the way web sites remember who you are, avoiding the need to enter and re-enter your password from day to day. Cookies can also store information, such as user preferences on a given computer.

Facebook, blogs, and many other public web sites use cookies. They can be misused, of course, such as by advertisers who track your movement around the Internet. However, most uses of cookies are not so invasive. But, there is a big problem with cookies: they’re entirely insecure.

Cookies might be encrypted by the server, so certain information in the cookie might be secure. However, when cookies are used to maintain a user’s logged-in state, the user is at risk of having his or her session stolen. This happened recently with Facebook, in fact. What if a hacker is able to grab a user’s session and post malicious content or delete photos or do other destructive acts in the name of the user? This is scary.

What solutions exist? Presently, the only solution is to use HTTPS. HTTPS will encrypt all traffic to and from the client. Unfortunately, HTTPS comes with a high price: encrypting and decrypting every single bit of data between the browser and server is expensive.

So, a colleague and I thought about a better approach and we came up with a proposal for just encrypting a part of the cookie information passed between the client and server. Have a look and tell me what you think. The draft can be found here.
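The general idea, encrypting only the sensitive portion of the cookie while leaving the rest in the clear, can be sketched as follows. This is a toy illustration, not the draft's actual format: the keystream below (SHA-256 in counter mode) is for demonstration only, and a real implementation would use a vetted cipher such as AES.

```python
# Toy sketch of a partially encrypted cookie: non-sensitive attributes
# travel in the clear; only the session token is encrypted. The SHA-256
# counter-mode keystream is illustrative -- use a real cipher (AES) in
# practice. The actual draft defines its own format.
import base64
import hashlib

NONCE_LEN = 8  # fixed nonce length for this sketch

def _keystream(key, nonce, length):
    """Derive a pseudo-random keystream of the requested length."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def seal(key, nonce, session_token):
    """Encrypt the session token and encode it for cookie transport."""
    ct = bytes(a ^ b for a, b in
               zip(session_token, _keystream(key, nonce, len(session_token))))
    return base64.urlsafe_b64encode(nonce + ct).decode()

def open_sealed(key, sealed):
    """Recover the session token from the sealed cookie field."""
    raw = base64.urlsafe_b64decode(sealed)
    nonce, ct = raw[:NONCE_LEN], raw[NONCE_LEN:]
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))

def build_cookie(key, nonce, user, session_token):
    # Only the session token is sealed; the user name stays readable,
    # so the server avoids encrypting the entire exchange.
    return "user=%s; enc=%s" % (user, seal(key, nonce, session_token))
```

The point of the split is cost: only the few bytes that would let an attacker hijack the session are encrypted, rather than every bit of traffic as with full HTTPS.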

Permalink: Secure HTTP Cookies

H.323 and SIP Debates Rage On

February 19, 2010

You know, back in 1999 there were rather heated debates over H.323 vs. SIP. Then, there were claims that H.323 was dead. (Jeff Pulver said that: I heard it with my own ears.)

Roll the clock forward to 2010 and we still hear the same things. OK, perhaps it is not being declared dead, but some view H.323 less favorably for whatever reason. Is it because SIP does something H.323 cannot do? Nope. H.323 does everything SIP can do and more.

Perhaps one day SIP might be a major success. After all, if 5,000 people start charging through a concrete block wall thinking they can run through it, they will likely succeed. There might be a few casualties on the front line, but they'll knock it down. And so it is with SIP.

The reality is, though, that H.323 continues to be deployed and it dominates the videoconferencing market. As chair of the H.323 standards committee, I'm still actively engaged in the development of H.323 and spending some time looking forward now to H.325.

I will not try to sell you on the concept of H.323, since it is a well-established protocol. But, the new XML-based H.325 is really exciting. If you wish to know more about it, by all means, ping me.

In the meantime, let the debates continue! This is quite the spectacle! I'm eager to see what things will look like in another 10 years. :-)

Permalink: H.323 and SIP Debates Rage On