Paul E. Jones' Blog

Configuring DNSSEC on Your Domain

August 17, 2012

DNSSEC is the standard for securing your domain name, protecting it from attackers who want to intercept communications by directing web browsers, email servers, etc. to destinations other than the correct one. Enabling DNSSEC is actually very straightforward. I’ll explain the steps for those who use BIND to provide DNS services, as it’s one of the most popular DNS servers on the Internet.

The first step is to generate a pair of keys. The first key is called the “Zone Signing Key” (ZSK) and it can be created using this command (replace “” with your domain name):

dnssec-keygen -a RSASHA1 -b 1024 -n ZONE

Next, you need to create a key called the “Key Signing Key” (KSK). It is created using the following command:

dnssec-keygen -a RSASHA1 -b 4096 -n ZONE -f KSK

Note that the -b flag indicates the key length in bits.
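As a concrete sketch (the domain name and key tags here are purely hypothetical), the two commands might look like this:

```
dnssec-keygen -a RSASHA1 -b 1024 -n ZONE example.com
dnssec-keygen -a RSASHA1 -b 4096 -n ZONE -f KSK example.com
```

This would leave behind four files such as "Kexample.com.+005+11111.key" and "Kexample.com.+005+11111.private" for the ZSK, and "Kexample.com.+005+22222.key" and "Kexample.com.+005+22222.private" for the KSK.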

These two commands will produce four files:


The format of the filenames is “Knnnn.+aaa+iiiii.key” (the public key) and “Knnnn.+aaa+iiiii.private” (the private key). The value “nnnn” is the domain name you are securing. The value “aaa” identifies the cryptographic algorithm used; in the example above, 005 refers to RSA/SHA-1 (per RFC 4034). The value “iiiii” is the key tag, which is just a 16-bit value that identifies this particular key for this particular domain.
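As a small sketch of how those pieces fit together (the filename below is purely hypothetical), a bit of POSIX shell can pull the parts out of a key filename:

```shell
#!/bin/sh
# Parse a DNSSEC key filename of the form K<name>.+<alg>+<keytag>.key.
# The filename here is purely hypothetical.
keyfile="Kexample.com.+005+12345.key"

name=${keyfile#K}          # drop the leading "K"
name=${name%%.+*}          # domain name: everything before ".+"
rest=${keyfile##*.+}       # "005+12345.key"
algorithm=${rest%%+*}      # algorithm number (005 = RSA/SHA-1)
keytag=${rest#*+}
keytag=${keytag%.key}      # the 16-bit key identifier (key tag)

echo "name=$name algorithm=$algorithm keytag=$keytag"
```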

Next, you concatenate the .key files to the end of your zone file, like this:

cat >>

Now, you have to “sign” your zone file. To do that, you need to identify which key is your KSK and which is your ZSK. If you took note of the file names created by the key-generation commands above, you’ll know. Otherwise, just look at each file in a text editor and you’ll see which is which. In our example, we’ll assume that “” is the ZSK and “” is the KSK. You’ll then execute this command:

/usr/sbin/dnssec-signzone -o -N keep -k

This will result in the creation of a file called “”.
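Filled in with purely hypothetical file names, the whole signing sequence might look like this:

```
cat Kexample.com.+005+11111.key Kexample.com.+005+22222.key >> example.com.zone
/usr/sbin/dnssec-signzone -o example.com -N keep -k Kexample.com.+005+22222 example.com.zone Kexample.com.+005+11111
```

The second command would produce "example.com.zone.signed" along with a "dsset-example.com." file.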

Now, you just have to make a few small adjustments to the /etc/named.conf file. Here are the important changes:

options {
    dnssec-enable yes;
    dnssec-validation yes;
    dnssec-lookaside auto;

    /* Path to ISC DLV key */
    bindkeys-file "/etc/named.iscdlv.key";

    managed-keys-directory "/var/named/dynamic";
};

include "/etc/named.root.key";

zone "" IN {
    type master;
    file "";
    allow-update { "none"; };
};


Place both the and into the directory where BIND keeps its zone files. Restart named and/or issue these commands:

/usr/sbin/rndc reload
/usr/sbin/rndc flush

At this point, your DNS server is ready to go. However, your registrar must have the appropriate records in place; usually, these are DS records. Fortunately, those records are generated for you automatically by the “dnssec-signzone” command. You will see a file called “” with the DS records inside. All you have to do is provide those to your registrar, much like you might assign your name servers. The procedure varies by registrar, so I cannot detail it here. However, it’s not so hard once you find the right place. The registrar should validate that everything is working properly before activating the DS records. One would not want an incorrect record in place, as that would break the trust chain established via DNSSEC and thereby “break” domain resolution.
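As an illustration (every value below is hypothetical), the DS records in that dsset file look roughly like this, one per supported digest algorithm:

```
example.com.  IN DS  22222 5 1 0123456789ABCDEF0123456789ABCDEF01234567
example.com.  IN DS  22222 5 2 0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF
```

The fields are the key tag of the KSK, the algorithm number, the digest type (1 = SHA-1, 2 = SHA-256), and the digest itself.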

ICANN has a list of registrars that now support DNSSEC. Not all of them do, and they certainly do not support DNSSEC for all TLDs. So, it is best to check with your registrar before going through all of these steps only to be disappointed.

If you wish to validate that DNSSEC is working properly, you can use the “dig” command on Linux machines like this:

dig +topdown +sigchase

That command will report success or failure in the trust chain. Alternatively, visit and perform a basic test via the web.

One last point: it is recommended that you re-sign your domain at least every 30 days. It's not necessary to generate new keys; merely re-sign the zone file. (Note that if you do decide to change the key used to sign the domain, you need to ensure that you properly handle the key rollover. Otherwise, for a period of time some DNS servers might consider your domain's signature invalid. DNSSEC key rollover is a whole other topic.)
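One way to avoid forgetting the re-signing is a monthly cron job. This is only a sketch, and every name in it is hypothetical:

```
# Re-sign the zone early on the 1st of each month and reload BIND
0 3 1 * * cd /var/named && /usr/sbin/dnssec-signzone -o example.com -N keep -k Kexample.com.+005+22222 example.com.zone Kexample.com.+005+11111 && /usr/sbin/rndc reload
```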

Permalink: Configuring DNSSEC on Your Domain

Making Sense of AT&T's New Data Plan Rates

July 18, 2012

AT&T announced today that it will be offering shared data plans called "AT&T Mobile Share" so that people with multiple devices can share data across those devices. Quite often, it's families that would benefit from sharing data, which was the case when "family plans" were introduced in order to share voice minutes.

Voice minutes and text messaging are unlimited with these new plans. This suggests that AT&T realizes that lower-cost VoIP alternatives exist, so there's no point trying to compete in the voice space. Rather than compete, AT&T will force customers to pay for voice/text by charging a flat fee per phone, regardless of usage.

The new data sharing plan allows families to pool bandwidth as they did voice before, but the prices are not really cheaper than paying for individual plans. For example, if one has a 3-phone family plan at $70 with unlimited text and 2GB of data per phone, the total price is $70 + $30 (text) + $25 * 3 (data) = $175. Under the new pricing, the unlimited voice/text + 6GB of data would cost $195. But voice is unlimited, right? Who cares? The subscriber was probably OK with the limited voice minutes.

Anyway, here is the price breakdown:

How many people will actually save money with these new plans?

Permalink: Making Sense of AT&T's New Data Plan Rates

Acer Broke My Tablet

July 15, 2012

Less than a year ago, I bought an Acer Iconia A500 tablet. It's a great tablet and has worked really well, but in April Acer sent out the Android 4.0 update and, in so doing, broke an important feature on the tablet: screen rotation. I learned that if I reset the tablet and let it reboot a few times, it will eventually start working. There appears to be a race condition where the gyroscope is not being initialized properly.

Anyway, I waited a couple of months and contacted Acer about it. Here is what they said:

I understand that the screen of the tablet is not rotating. ... This issue is caused because the G-sensor on the tablet is not properly initializing. ... A new OS image was created to resolve this issue but there currently is no plan to release this image as a FOTA update. ... I have verified your product serial number and found that the unit is not covered under standard limited warranty. In order to resolve this issue, we can schedule the unit for repair.

Schedule it for repair? And what will they do? Install the firmware that should be released to fix the known problem?

This was my first and last Acer device. That's pretty rotten service, in my opinion. There is a world of difference between a broken device and a known defect introduced by the manufacturer through a software update!

Update: As required by Acer to fix the "broken" Acer Iconia A500 (by way of installing a new firmware load), I mailed the tablet to them. They returned it to me and, indeed, it had a new firmware load on it. Whether they had to open the tablet or not, I do not know. One thing that scared me was the service order stuffed in the box that said there were "surface scratches". I was afraid that perhaps the tablet had been damaged in shipping. As it turns out, there were no scratches. The tablet was in perfect shape. Now, why would they have said that? I bet they say that on EVERY service order so that, if somebody complains that Acer damaged their device, they can say they observed surface scratches when they received it. In any case, they were not entirely honest with this statement, as there are no scratches on the screen or elsewhere.

Permalink: Acer Broke My Tablet

America Forcing Its Laws on the World Sets Horrible Precedent

June 25, 2012

In case you're unaware, the United States Government seizes domain names of people and businesses all the time. They do it arguing those people are breaking the law, but take the domain names away even before there is a trial and before there is a guilty verdict. Three such domain seizures in recent months have been extremely questionable and, in my opinion, totally wrong. Worse, one guy is at risk of being dragged to the United States and thrown in jail for nothing more than links on his web site.

There was a gambling web site in Canada. It's a Canadian company operating a business in Canada with the domain name registered in Canada. The federal government does not want you or me to gamble, so they took away the domain name by hijacking it. They did not have the authority to go to Canada to do their evil work, so they basically forced Verisign, the U.S. company that manages the .com names, to hand over the name. Along with that, the U.S. federal government indicted the man who owned the company.

The next case is a web site reportedly used to pirate movies and music. Federal law allows service providers to be exempt from liability for what users post on the Internet, as long as they comply with the Digital Millennium Copyright Act (DMCA). This company did that, even though it is a foreign company: a Hong Kong-based company, with the owner/founder living in New Zealand. The U.S. worked with local authorities to raid the owner's house and take his money and property. They took away the company's servers, and many users are complaining that they want their files back. One man even filed a lawsuit against the U.S. Government to get his files back, and the U.S. argued that returning them would "set a bad precedent". Meanwhile, the company is closed, the 40+ employees are out of work, and there is no evidence that I can see that they were not in compliance with the law that, remember, they’re not even obligated to follow since they are not a US company. Perhaps they did thrive on the exchange of illegal content, but they followed the law, it seems.

The last case is even more difficult for me to understand. A college student in the UK named Richard O'Dwyer ran a web site on which users posted links to TV shows and movies around the Internet. This guy has never been to the U.S., did not do business in the U.S. (outside of the minority of his users who were from the U.S.), did not have servers in the U.S., had no copyrighted works on his web site, etc. Even so, the U.S. government is trying to force him to come to the U.S. to face trial and go to jail. Did you know that it is illegal to post a link on a web site to copyrighted works? It is not illegal in most countries, but it is here in the oppressive U.S. These kinds of laws rank right up there with taxing Americans on income they earn anywhere in the world, even if they don’t live in the U.S., or taxing people who give up their American citizenship.

The U.S. is nuts sometimes, and I don’t mind saying so. I love my country, but the politicians sometimes create laws to cater to big media companies and they stomp all over us little people. Just to put this into perspective, can you imagine facing jail time over something you say on the Internet that in your country is perfectly legal? If we follow America’s lead, then if any one of us were to say something negative about the Chinese government, for example, then we should all be picked up, carried to China, and put in jail or put to death. Sound reasonable to you? This is the real danger the U.S. is putting us all in by doing these things it is doing.

Jimmy Wales, founder of Wikipedia, is trying to stop the U.S. from bringing Richard O'Dwyer here to face trial over links on his web site. I encourage all of you to sign the petition to stop the U.S. Government. If you are American, I would also encourage you to write to your senators and congressmen to have them put an end to trying to force the world to comply with American laws. No country should ever be able to apply its laws to a person or business in another country, using a person’s words or a service they provide on the Internet as justification.

Permalink: America Forcing Its Laws on the World Sets Horrible Precedent

Delegating ENUM Resolution Responsibility

March 29, 2012

One of the biggest challenges with respect to getting ENUM deployed is politics. Everyone wants to control the numbering plan because they either see money in owning the numbering, they do not want to be liable for service outages due to reliance on some other entity, they fear a loss of control over numbers they control, or something else. In any case, it has been very difficult to move the world to ENUM. Well, I’m here to ask the question, “Why worry about it?”

Dialed Digits is an ENUM service provider, one of many around the world. One can query Dialed Digits starting at the root of the ENUM tree at Dialed Digits can delegate management of a portion of the ENUM tree to another organization, too. This can be done by simply inserting NS records into DNS like this under

4.4 IN NS

This DNS record basically says that all of the digits for the UK can be queried via (Please note that record is entirely fictitious. Packetizer does not manage the phone numbers for the UK.) So, if I were trying to contact the web team for Buckingham Palace in London at +442079304832, then a query would be sent for Seeing that +44 is handled by, a query would be directed to that server, still looking for However, what if BT is the owner and manager of those phone numbers? It probably is. Further, I’m quite certain that BT is going to want to manage its numbers under the BT domain name, perhaps at
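To make the query name concrete: an E.164 number is turned into an ENUM domain by stripping the "+", reversing the digits, and joining them with dots under the root of the ENUM tree (e164.arpa in the public tree). A small POSIX shell sketch:

```shell
#!/bin/sh
# Convert an E.164 number into its ENUM domain name under e164.arpa.
number="+442079304832"          # Buckingham Palace's published number
digits=${number#+}              # drop the leading +
name=""
while [ -n "$digits" ]; do
  last=${digits#"${digits%?}"}  # take the final digit...
  name="$name$last."            # ...and append it, dot-separated
  digits=${digits%?}            # shrink the number from the right
done
enum_name="${name}e164.arpa"
echo "$enum_name"               # 2.3.8.4.0.3.9.7.0.2.4.4.e164.arpa
```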

So, how do we tell the ENUM resolution engines to go look in an entirely different domain? Here’s my proposal. We should use NAPTR records and introduce a new flag “x” that signals that responsibility for queries has been “transferred” to the specified domain (or sub-domain).

For example, under we might have a record that looks like this:

*.4.4 IN NAPTR 100 10 "x" "" ""

So, when a phone number like +442079304832 is resolved, an answer will come back with a NAPTR record that effectively says “go ask again over at”. And so the query is re-issued under the specified domain.
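Spelled out with a hypothetical target domain, the proposed “transfer” record might look like this (order 100, preference 10, the new “x” flag, empty services and regexp fields, and a replacement domain):

```
*.4.4 IN NAPTR 100 10 "x" "" "" enum.bt.com.
```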

What this allows service providers to do is, through DNS and ENUM procedures, define who the authority is for any given digit string and delegate management to them. There is no need to rely on a central authority. There would be a need to establish peering relationships, but this approach would actually allow one to rely on any number of companies to provide that peering management.

Imagine if Dialed Digits were the “fallback” service for AT&T. Perhaps AT&T might manage ENUM services for all of its own numbers and might insert “transfer” NAPTR records for some numbers owned by service providers with whom the company has a direct peering relationship. But, for all other digits, it might rely on Dialed Digits to establish those peering relationships and provide the appropriate NAPTR records (either answers to queries or further “transfer” records).

UPDATE: I was exchanging email with Patrik Fältström on this topic. He suggested that, rather than introduce a new NAPTR record, just use DNAME records. That's really a far simpler way of delegating. So, an ENUM provider might have a DNS entry that looks like this:


This would mean that any query for +44 numbers would be directed to BT's ENUM tree at, if such an address existed.
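As a purely hypothetical sketch of that suggestion, such a DNAME record under the provider's tree might look like this:

```
4.4 IN DNAME 4.4.enum.bt.com.
```

With that in place, a query for a +44 number under the provider's tree would be rewritten to the corresponding name under 4.4.enum.bt.com.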

Permalink: Delegating ENUM Resolution Responsibility

Using XMPP with VoIP Protocols

March 20, 2012

As many know, I am a big advocate for enabling a plurality of devices and applications to be used together as part of a multimedia communication session. That is the whole idea behind the work the ITU is presently doing with respect to H.325 (or AMS). AMS aims to be the next-generation multimedia communications protocol, succeeding the now-aging SIP and H.323 protocols, both of which are now 16 years old. While work on AMS is still progressing, there are things we can do in the interim to make it easier to integrate some applications, perhaps most importantly text and voice/video.

XMPP is the international standard for instant messaging and presence. It is widely used within enterprises around the world and used by services like Google Talk. Due to its design, it has the potential to be as ubiquitous as email is today. And like email, it fully allows for federation between different domains. With XMPP, it is just as easy to have an instant messaging (IM) session with a colleague as it is anybody around the world.

H.323 and SIP are the two leading voice and video communication standards in the market today. H.323 is still the most widely used protocol for videoconferencing, while SIP is primarily used as a voice “trunking” protocol between enterprises and service providers. In the core of service provider networks, both H.323 and SIP are employed, with SIP perhaps now leading as a pure voice replacement.

It is becoming increasingly possible to use VoIP (voice or video) to place calls between colleagues and with other people around the world. Since VoIP generally means “voice” in my mind, I prefer to use a more generic term of IP Multimedia Communications (IPMC), of which voice, video, instant messaging, whiteboarding, etc. are all a part. So, I’ll use IPMC below, but you can think of that as “VoIP” if you prefer that term.

When I initiate an IPMC session, it usually offers only a single mode of communication. Quite often, it is just a voice or voice/video call (admittedly, that is two modalities) or instant messaging. Rarely do we have the ability to initiate one session (e.g., voice) and have the ability to use instant messaging with that, especially if the two applications are not a single unified application. For example, if I make a call using my IP phone, my IM client has no idea that I’m talking to somebody. Likewise, if I am carrying on a few instant messaging sessions, my IP phone is oblivious to this fact.

What we need is a means of better integrating voice/video applications with XMPP. There was some work that started in the IETF to do this, but I do not think that work progressed too far. Nonetheless, I think it is important work and I figured I would write up my thoughts here.

We have two problems we need to solve:

  • My voice/video phone (desk phone or soft client) needs to know when I have an instant messaging session active with somebody so that I can just press a button to launch a voice call, and it needs to know the voice contact information for the other person
  • My instant messaging client needs to know when my voice/video phone is in an active call with somebody, and it needs to know the XMPP JID (the user’s identity) for the person with whom I am having a conversation

From these two requirements, we can see there is a need to share addressing information and there is a need to convey some presence state between the phone and the instant messaging client.

One way to convey addressing information is to simply advertise it within the protocols themselves. For example, when I configure my voice application, I could tell it my XMPP address. Likewise, when I configure my XMPP application, I can tell it the URI for my voice/video application. That’s pretty simple. You can imagine in SIP, for example, that we might introduce a header like this:
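As a purely hypothetical illustration (the header name and address here are invented for the sketch, not part of any standard), such a SIP header might look like this:

```
INVITE sip:bob@example.com SIP/2.0
...
XMPP-Address: xmpp:alice@example.com
```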


In fact, XMPP already defines the means through which addresses can be advertised for other applications.

A small addition like this to SIP and H.323 would allow me to call you, for example, and immediately know your XMPP address or your voice/video URL. One could also advertise one's H.323 or SIP URI via XMPP, too. If I have XMPP and voice/video integrated into a single application, that would be all I need to know in order to quickly launch a different mode of communication right from within my application.

Often, though, these applications are separate. So what we need is a means of allowing the voice/video application and XMPP application to convey their status information to each other. A very reasonable way to do that is to re-use XMPP. After all, XMPP was designed to be a presence protocol. It has the ability to learn and maintain state information related to various presentities (“presence entities”).

Now, with the phone knowing about active IM sessions and the XMPP client knowing about active voice/video sessions, it is now trivial to initiate new modes of communication with the touch of a button. If I call you using my phone, my IM client would know I am on a call with you. I could press a button on my IM client that corresponds to the active voice call and use instant messaging without ever having to manually enter an address.

There are also ways for clients to learn about addressing information for users automatically, too. For example, rather than tell my phone my JID, we can use technologies like Webfinger. Using Webfinger, it would be possible for my phone to query to learn the other addressing information related to me. Further, it would be possible for the person I call to learn my other addresses (IM, voice, email, etc.).
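To sketch how such a query could be formed (the account address and the /.well-known/webfinger path are assumptions here; the Webfinger drafts were still settling on the exact mechanism at the time of writing), a client might build the query URL from the account identifier like this:

```shell
#!/bin/sh
# Build a Webfinger-style query URL from an account identifier.
# The account and the well-known path are illustrative assumptions.
account="acct:alice@example.com"
host=${account#*@}       # the domain part after the '@'
url="https://${host}/.well-known/webfinger?resource=${account}"
echo "$url"
```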

It is also possible to map telephone numbers to Webfinger account URIs using ENUM. So, it would be possible to convey only the phone number and then discover all of the other addressing information related to a user.

Webfinger makes it very easy to discover information about another person, but I realize that some people might be concerned with privacy. Therefore, Webfinger should be considered as one option and not the only solution. Still, it is one option to make provisioning significantly simpler.

ENUM could also be used to map a phone number to an XMPP address only. However, since we would still need to have the ability to map from an XMPP address to a phone number, we need to either advertise addresses via the session protocols or use Webfinger. I’m open to other recommendations.

Permalink: Using XMPP with VoIP Protocols

Amazon EC2: Creating EBS-backed Instances with Ephemeral Storage and Automatically Deleting the EBS Storage upon Termination

March 12, 2012

I use Amazon EC2 extensively. One of the things I noticed over the past couple of years is a move from instance-store to EBS-backed instances. I’ve read the literature on EBS-backed instances and, quite honestly, I don’t care about the benefits. If an instance dies, I can re-start it and have it up and running in no time, since virtually everything is scripted. That said, I’m not going to fight the trend.

One thing I do miss, though, is that instance-store instances have a large chunk of ephemeral storage available for use for free. With EBS-backed instances, the ephemeral storage is usually not available. It is, though, if you go through the motions of creating your own AMI or find one configured as outlined here.

To take the easiest route, launch an EBS-backed instance of the AMI you’d like to use with ephemeral storage. Make whatever changes you wish to it once you have it running. You might want to add this to the /etc/fstab, adjusting the device name and filesystem as required for your version of Linux:

/dev/xvda2 /mnt ext4 defaults 1 2

Now, stop the instance and take a snapshot of it. The snapshot will be our new AMI when done, so it will persist as long as you want to keep the AMI around.

Then execute the following command:

ec2-register -n AMI_Name -d AMI_Description -a PLATFORM --kernel KERNEL --ramdisk RAMDISK --root-device-name /dev/sda1 -b /dev/sda1=SNAPSHOT_NAME:10:true -b /dev/sda2=ephemeral0

Each of the variables above is defined here:

  • AMI_Name: A friendly name you assign to the AMI
  • AMI_Description: A longer description you assign to your AMI
  • PLATFORM: The platform, either "i386" or "x86_64"
  • KERNEL: The kernel ID of the kernel to use, which can be found using ec2-describe-images or observing the kernel used while the original instance is running
  • RAMDISK: The ramdisk ID to use, which should also match that specified in ec2-describe-images or observing the one used while the original instance is running (this is often not specified)
  • SNAPSHOT_NAME: The name of the snapshot you created above. Note the '10' following indicates the size of the EBS volume to create for the root filesystem and 'true' means that the volume should be deleted when the instance is terminated (you may prefer to set this to false)
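Filled in with purely hypothetical values, the command might look like this:

```
ec2-register -n my-server-ami -d "EBS-backed AMI with ephemeral storage" \
    -a x86_64 --kernel aki-12345678 --root-device-name /dev/sda1 \
    -b /dev/sda1=snap-1234abcd:10:true -b /dev/sda2=ephemeral0
```

The --ramdisk option is omitted here since, as noted above, it is often not specified.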

One of the other things I really do not like about EBS-based instances is that when you terminate them, the EBS storage is left behind and you have to clean that up separately. Using “true” as a part of the -b parameter means that the EBS storage will be deleted automatically when the instance is terminated.

Note that the ec2-register command will return the ID of your new AMI.

Permalink: Amazon EC2: Creating EBS-backed Instances with Ephemeral Storage and Automatically Deleting the EBS Storage upon Termination

Replaced WinZip

February 10, 2012

I noted at the end of December that I was pretty fed up with WinZip. That feeling has not changed. How could it after they installed a toolbar against my wishes that was almost viral in nature? How could it after I found so many things installed in the WinZip directory that were not things I purchased or was even told about during the install process (like the "registry cleaner" and such)?

I went on a search for a new tool. I looked at 7-Zip and PowerArchiver. I finally settled on PowerArchiver. Both are good tools, but PowerArchiver has a few more features that I will use.

If you're looking for a good replacement for WinZip, I think you should take a good look at both of these tools.

Permalink: Replaced WinZip

New Domains Make .com Irrelevant

January 24, 2012

As many have undoubtedly heard, ICANN has decided to open the domain name floodgates by allowing the registration of all kinds of new “top level domains” (TLDs). Rather than having just .com, .org, .net, the various country TLDs, and the very small number of newer TLDs like .mobi, people will be able to register domain names that could be anything from .cars to .planes to .crazy.

I do not know how things will turn out, but I do see a few interesting things ahead.

First, there will definitely be those abusing trademarks. I can already imagine that various names like pepsi.cola or apple.core might be registered by individuals or companies that do not own the rights to those names. A company is certainly not going to want to manage a portfolio of hundreds or thousands of domain names just to ensure that its name is not misused. Unfortunately, this might prove to be a real challenge unless ICANN sets some strict policies that are favorable to trademark holders. (Yes, they have policies now, but I fear they will be insufficient when there are so many battlefronts.)

Why even bother with so many new TLDs? Sure, the .com namespace is very crowded. It’s extremely hard to find a .com domain name. Even so, what will be a good name in a sea of countless TLDs? If you create a web site called “acorn”, is there value in using the domain name acorn.misfire or acorn.toad? If those names are acceptable, then one would think is likewise a reasonable option, yet I see very few companies using .biz. People go out of their way to create silly .com names to avoid use of .biz. How about use of subdomains? I suspect there are domain owners that would be delighted to transform a good .com name into one that can serve as the parent domain for businesses, blogs, etc. Just imagine if owners of names like,,, etc. re-purposed their domains to serve vertical industries. Perhaps a business that fits no particular name might just pick something generic like

An interesting consequence of the expansion of the TLD namespace is that search engines will become even more important. No longer will a person be able to just remember a name like “youtube” and have some level of assurance that the site is “”. Going forward, the new GeeWow site might be anywhere. It might be GeeWow.quint. Now, who could have possibly remembered that? If people started using sub-domains, it might be With so many TLDs and the possibility of using sub-domains, one will become even more reliant on search engines to help us find information. I suspect we will actually start to see less use of the address bar on browsers and more use of search engines.

Permalink: New Domains Make .com Irrelevant

SOPA Scared Us, but the Megaupload Affair Proves the US Government Exercises Unilateral Power

January 20, 2012

In case you did not hear the news this week, the US Government took down servers owned by and worked with authorities in countries around the world to have several of the employees in the company arrested. According to Wikipedia, the company employed 155 people, yet there was no consideration given to the employees and families affected by the actions of the US Government. The Government did not give the defendants an opportunity to answer to the charges. Rather, they just shut them down.

As far as I know, I have never used Megaupload. I only learned about it through the news stories that came out this week. One article by CNN suggested that the site was used by people for all kinds of legitimate purposes. I asked my 17-year-old son if he had heard of the site, to which he positively replied saying he used it all the time. I asked him what for and he said that that’s how a lot of people distribute free mods for games. He said that to his knowledge, he had never downloaded anything from the site that was not free for downloading, certainly no music or videos. Apparently, content is only accessible to people who know the URL to the content. What this means is that if the site was used for sharing illegal content, it was certainly not done on the same scale as that of organizations like the Pirate Bay where anybody can go search for content and download it. Per the CNN article, one cannot easily search the Megaupload site. To me, the fact that the site cannot be searched serves to counter any claim by the US Government that the site was primarily used for piracy.

What is really frightening is the fact that the government can so quickly and easily remove any server from the Internet. Is the law on their side? Or is the government overstretching its power? Whatever happened to the idea that a person is innocent until proven guilty? If the founders and operators of Megaupload really created the service primarily for illegitimate reasons, this would all come out in a court of law. Further, finding that the company and its operators were guilty, I would have no objection to the court then ordering that the servers be confiscated and service terminated.

Just the opposite happened here, of course. Not only did the Government shut down the service, but they did so even while the DMCA exists to protect service providers. Megaupload is registered with the Government as a service provider and claims to honor all DMCA take-down requests. I personally have no love for the DMCA, as I viewed it as going overboard in trying to fix a problem, but there is a provision in that law explicitly designed to offer protection to service providers whose users violate copyright laws. The US Government did not even care to respect that law.

What we see is the government going about this the wrong way. They “shoot first and ask questions later.” This seems to be an unfortunate trend with US law enforcement. I could cite so many examples here where I live where police officers have killed innocent people, only to offer up a shallow apology after the fact. What is wrong with America? At least they didn’t murder somebody over a supposedly stolen PlayStation this time. (Yes, the police murdered an innocent young man over a stolen PlayStation that was, in fact, not stolen at all. They shot him right through the front door of his apartment, killing the young man and his dog.)

While I fully appreciate the Government’s desire to stop piracy and I support copyright holders, surely they cannot be so naïve as to believe that shuttering a site like Megaupload will address the problem. Piracy has always been an issue. When I was a kid, I remember kids copying cassette tapes of songs from friends. They did the same thing with computer programs and such. The software industry worked to educate people about piracy. We also used technology to help reduce the rate of piracy (e.g., activation keys and registration processes). Even so, piracy exists. Honest people will be honest. Those who do not want to pay or cannot afford to pay for what they use will not pay for it, period. I will not call it “theft”, though. Unlike stealing a car or a computer, a digital “pirate” merely copies content for personal use without paying the creator of the content. It might mean the copyright holder is deprived of income, but one has to ask: would this person have paid in the first place? Most likely, they would not. After all, each person only has so much disposable income. So, 100 pirated movies, for example, do not represent a loss of 100 times the price of a movie. Even so, I believe the MPAA and RIAA would like Congress to believe that is the case.

What I think will likely happen is that acts like this by the Government will push programmers to produce a technological solution to the problem. Today’s pirate networks, for example, utilize a peer-to-peer (P2P) technology. It is difficult to stop such networks, but they have weaknesses. It is possible for the RIAA, for example, to monitor such networks and to then file court cases against individuals. Sharing in a P2P network is not anonymous.

An evolutionary step for P2P technology might be to break up files into many small chunks and distribute those chunks throughout the Internet, storing them on the computers of those people who utilize the P2P network, making redundant copies, and encrypting everything so that no useful information is identifiable on any single node. In effect, nobody is sharing anything, but everybody is sharing everything. The challenge to making this successful would be in devising an approach to splitting the files into pieces and constructing a link chain that allows one to then re-assemble any content from some uniquely identifiable starting point, with every scrap of information stored only in the highly distributed P2P network. Perhaps the only means of accessing the content might be via a URI of the form dhtnet:ec116dcc-43b0-11e1-a81a-12313a003d32, where “dhtnet” refers to some Distributed Hash Table Network or other technology. That URI would not direct a browser to go to any particular web site or server, but would result in the P2P software sending queries through the P2P network and assembling the pieces of the corresponding content, following the links and re-assembling the chunks.
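The chunk-and-reassemble idea described above can be illustrated with a toy sketch in Python. Here an ordinary dictionary stands in for the distributed hash table, chunks are addressed by their SHA-256 digests (content addressing), and a manifest of chunk IDs plays the role of the opaque identifier in the hypothetical dhtnet: URI; encryption and redundant replication, which a real network would need, are omitted for brevity.

```python
import hashlib

CHUNK_SIZE = 4  # tiny for demonstration; a real network might use 64 KiB or more

def store(dht: dict, data: bytes) -> str:
    """Split data into chunks, store each in the DHT keyed by its
    SHA-256 digest, and return the ID of a manifest listing the chunks."""
    chunk_ids = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        cid = hashlib.sha256(chunk).hexdigest()
        dht[cid] = chunk          # in reality: replicated across many remote nodes
        chunk_ids.append(cid)
    manifest = "\n".join(chunk_ids).encode()
    mid = hashlib.sha256(manifest).hexdigest()
    dht[mid] = manifest
    return mid                    # the only handle needed to retrieve the content

def fetch(dht: dict, manifest_id: str) -> bytes:
    """Re-assemble content by following the manifest's chain of chunk IDs."""
    chunk_ids = dht[manifest_id].decode().splitlines()
    return b"".join(dht[cid] for cid in chunk_ids)

dht = {}                          # stands in for the distributed network
uri = store(dht, b"hello, distributed world")
assert fetch(dht, uri) == b"hello, distributed world"
```

Note that no single entry in the table reveals anything useful on its own: each node would hold only opaque, hash-keyed fragments, and only someone holding the manifest ID can locate and reassemble the original content.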

Whatever would the copyright holders do to fight that technology? If implemented properly, there would be nothing that could be done except to outlaw its use, which would be a challenge, since I can see good legal uses for the same technology. It could be an avenue for the oppressed to exercise free speech, a channel for piracy, or even a place to store your private and personal files. Best of all, the whole system would be anonymous, and files would be accessible only if you know the URI.

Copyright holders and the Government should not try to fight technology. Rather, they should educate people and take steps to improve the incomes of citizens. Doing that removes the financial barriers people face with respect to paying for content. Still, there will be some piracy. The real question is: what level of piracy is acceptable? As long as a copyright holder says “none”, they will never be satisfied.

Permalink: SOPA Scared Us, but the Megaupload Affair Proves the US Government Exercises Unilateral Power

