Thursday, May 31, 2012

PCAST presentation on spectrum policy and technology -- attention Obama and Romney

As shown here, demand for mobile connectivity is exploding. It cannot be satisfied using existing spectrum allocation policy and technology.

People have been experimenting with various forms of smart radio technology for many years, and a shift to sharing spectrum in new, dynamic, highly local ways seems inevitable.

That shift will cost incumbent spectrum owners, but pay large dividends for the economy as a whole.

The companies and nations that lead the innovation and establish the new policies and technical standards will reap large benefits, as did the companies and the nation that invented the Internet.

The President's Council of Advisors on Science and Technology (PCAST) will soon issue a report calling for the use of smart radios in sharing federal spectrum.

The report is not yet out, but you can see a 23-minute video presentation (followed by 21 minutes of good questions and answers) on the forthcoming report with specific recommendations for policies and pilot studies. You can also download a copy of the presentation slides.
The presentation, by PCAST member and venture capitalist Mark Gorenberg, reviews the situation, makes policy recommendations and calls for pilot studies using spectrum that is currently allocated to federal agencies.

(I found one thing weird -- they advocate giving agencies budget increases as an incentive to participate. Those agencies are owned by the American public -- why should we pay them to do the right thing on our behalf?).

During the question and answer period, Gorenberg estimated that modern technology could increase wireless capacity by as much as 40,000 times and he stated that the US is in a world-wide race to lead in this technology.
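
A number like 40,000 sounds implausible until you remember that independent improvements multiply. The factors below are hypothetical placeholders -- not the actual breakdown from the PCAST report -- but they show how a few independent gains compound into a huge overall capacity increase:

```python
# Illustrative only: the factors below are made-up placeholders, not the
# PCAST report's actual breakdown. The point is that independent gains
# multiply into a very large overall capacity increase.
gains = {
    "additional shared spectrum": 10,
    "better spectral efficiency": 4,
    "denser spatial reuse (small cells)": 1000,
}

total = 1
for factor in gains.values():
    total *= factor

print(total)  # 40000 with these made-up factors
```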

One hopes this report will be taken seriously -- Obama and Romney should be pinned down on this issue.

Update, July 2012:

The report was issued.

Update, May 14, 2013:

Tom Wheeler, an Invited Expert on the report, was appointed Chairman of the FCC.

Wednesday, May 30, 2012

Kimiko Ishizaka's novel, replicable business model for an open Bach score and recording

Check out pianist Kimiko Ishizaka's recording of Bach's Goldberg Variations. The recording and digital score are in the public domain under a Creative Commons Zero (CC0) license. Feel free to listen to, download and share the music and tweak the score.

Excellent -- but how does one fund such a project? It began as a Kickstarter project that started with a $15,000 goal and raised $23,748. The funds were used to create the new score and produce a studio recording.

The recordings are now in the public domain, and Ms. Ishizaka has a Web site which lists her forthcoming concerts. No doubt her concert income will increase as a result of this project. She is also publicizing the recording by offering a free double CD to anyone who will write a thoughtful and honest review of the recording and publish it on their blog, in a music forum, on a public Facebook page, etc.

The score is also online and open. You can download it, modify it, or "play" it on the site, as shown here.

Appropriately, the score was produced using the open source MuseScore notation software.

I am not a musician -- not even a great lover of music -- but I find this project and the MuseScore tools and community totally exciting!

Ms. Ishizaka is experimenting with new ways to make a living as a musician in the Internet era. (Her effort reminds me of Louis CK, who marketed a recording of his comedy concert direct to the consumer). One can imagine 1,000 scores and recordings. Kickstarter cannot provide funding for all of them, but universities, foundations and organizations like the National Endowment for the Arts and its world-wide counterparts surely could.

Tuesday, May 29, 2012

Google's storage pricing model combines the self-serving features of cell phone and ISP pricing

Here is Google's report of my storage utilization. It says I've not used any of my Picasa storage, but when I go to Picasaweb, I am told that I've used 84% of my one gigabyte. (I like the first result best, but I really have used most of my allocated gigabyte).

But, that reporting discrepancy is a minor bug. A more fundamental problem is that Google differentiates between my storing images and video (Picasa), email messages with attachments (Gmail) and arbitrary files (Google Drive).

That makes as much sense to me as differentiating between voice bits, data bits and text message bits on your cell phone bill. It's all bits.
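
To make the complaint concrete, here is a sketch of siloed versus unified quota accounting. The services, usage figures and quotas below are all made up for illustration -- they are not Google's actual numbers:

```python
# Siloed versus unified quota accounting -- all figures are hypothetical.
usage_mb = {"photos": 1100, "mail": 2200, "files": 500}
quota_mb = {"photos": 1024, "mail": 7680, "files": 5120}

# Siloed accounting: each service hits its own ceiling independently.
siloed_full = [s for s in usage_mb if usage_mb[s] >= quota_mb[s]]

# Unified accounting: bits are bits -- one pool, one ceiling.
unified_full = sum(usage_mb.values()) >= sum(quota_mb.values())

print(siloed_full)   # ['photos'] -- one silo is full...
print(unified_full)  # False -- ...while the unified pool has plenty of room
```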

(I am waiting for the water company to begin charging me different rates for my drinking, showering and car-washing water).

But, wait, there's more. Checking Google's storage pricing policy, we learn that pictures and videos you upload from Google Plus do not count against your Picasa limit and Google docs do not count against your Google Drive limit. Etc. (I said "etc." because I got tired of reading the fine print.)

That reminds me of Comcast not counting their own video material against download caps.

Google -- this seems a bit evil -- please unify your storage policy.

Screen sharing during on-air hangouts works, but could be better

Some friends and I do a weekly podcast called Yet Another Tech Show (YATS). We're streaming the podcasts using Google's "on-air" hangouts, and last Wednesday, we experimented with screen sharing during the podcast.

In the middle of the podcast, we talked about the simplicity of deploying servers and applications in the Amazon cloud and demonstrated a virtual server on a shared screen. The discussion went smoothly -- we could easily participate and collaborate -- but, as you see in this screen shot, the video quality was not perfect. You would not want to stream a fine print contract at this level of quality.

In the best of circumstances, real time screen sharing is difficult. A lot of data has to be moved quickly and a lot of processing is required to reconstruct and render the data as it arrives. It gets even rougher when the screens have different sizes, aspect ratios or resolutions. If I share my 1,920 by 1,200 desktop and you are viewing it in a 400 by 400 window, we have a problem.
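
The resolution mismatch alone forces a drastic downscaling. A minimal sketch of the fit-to-window arithmetic, using the numbers from the example above (the `fit` function is my own illustration, not Google's algorithm):

```python
def fit(src_w, src_h, dst_w, dst_h):
    """Scale a source screen to fit inside a destination window while
    preserving the source's aspect ratio (letterboxing the rest)."""
    scale = min(dst_w / src_w, dst_h / src_h)
    return round(src_w * scale), round(src_h * scale)

# A 1,920 by 1,200 desktop viewed in a 400 by 400 window:
print(fit(1920, 1200, 400, 400))  # (400, 250) -- roughly 1/23 of the pixels
```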

After our podcast, I played around a bit more with screen sharing. I started an on-air hangout between two computers sharing the same Internet connection -- my laptop and my wife's iMac.
Both machines have 1,920 by 1,200 pixel displays. My laptop has a 3.06 GHz dual-core CPU with 4GB of memory and runs 64-bit Windows; the iMac has a 2.8 GHz Core 2 CPU with 2GB of memory and runs OS X. Google Hangouts was the only application running on either machine.

First I shared the Mac's properties screen. It showed up quickly on the laptop, but as you see here, it was blurry. The image gradually sharpened until after around three seconds, it was easily readable, though, as you see, still imperfect. While the screen was easily legible, the rendering delay may have hindered a conversation.
Next I made Word documents with screens full of words on both machines and shared both screens. Again, rendering and "focusing" the pages took around 3 seconds on either machine. Character quality on the laptop was better than that on the Mac (shown here).

Cursor movement on a remote screen was jerky, but it was less than a second behind. The delay in selecting a single word then deleting it was well under a second whether working on the Mac or PC. The delay in deleting a paragraph was more noticeable -- about a second.

The delays were caused by some combination of the speed of the computers and communication time. The CPU utilization on both machines varied significantly while screen sharing, even if there was no change on the screens. (At times it was over 90 percent on the laptop). When other applications were running, performance deteriorated noticeably.

While imperfect, hangouts on air was good enough for our demo and conversation to run smoothly. This is version 1 and Google will improve their sharing and rendering algorithms -- version 2 will be better. Communication link speed is controlled by business interests, not technology, so it will be a more persistent constraint in the US.

The video of the YATS session is shown below -- the screen sharing segment starts at just after the 30 minute mark.

Tuesday, May 22, 2012

Is college's stone age about to end?

Mark C. Taylor, chairman of the department of religion at Columbia University, asks whether college's stone age is about to end in a three-part article on the university in the Internet era.

Here are a few quotes and paraphrases from the three parts:
  • We produce too many unemployable PhDs.
  • Is one paying for education or certification when attending college?
  • Some subjects can be outsourced; for example, let one college have a strong French department and another a strong German department.
  • Online education will be modular.
  • There will be online winners and losers.
  • Financial pressure and improved technology have reached the point where much education will soon go online.
  • The networking of higher education will transform how teachers teach and what students learn.
  • Disciplines will need to be reconfigured -- departments can be transformed or abolished.
  • Faculty will increasingly serve as academic counselors who advise students on designing classes and integrating programs at different institutions.
  • Excessive competition and overspecialization are the plagues of higher education.

Monday, May 21, 2012

Developing and deploying applications on the Internet is getting easier

There have been several major changes in the way we develop and deliver IT applications. We began with batch processing in the 1950s and progressed to time sharing, personal computers and now the Internet.  With each new platform, application development has become easier.

To develop an application in the days of batch processing, you had to be a nerd -- a professional programmer.  We used to keypunch assembly language programs then hand the card decks to operators who fed them to the computer in batches.  It typically took a couple of hours to get your results back.  Timesharing shortened the turnaround time, but professional programmers were still needed, and applications took months or years to build.

The personal computer enabled users to develop applications like newsletters, simple accounting systems, small databases and so forth using productivity software.  Today one can develop a complex application like a blog, database, wiki or social network on the Internet with little effort.

But, what if you want to run your own applications on your own server? That is also getting easier and cheaper.

In the early days, you needed a computer to run the service and a connection to the Internet.  It might have been a personal computer in your bedroom or on a shelf in a server room.

If the load grew, you could afford denser server blades in racks, but you were still responsible for maintenance and connectivity.

You could take care of the connectivity by moving your server into a data center, but it was still your server.

As personal computer power increased, we were able to borrow a page out of the mainframe book and partition a single physical server into several virtual servers.  Then Amazon and others took it one step further -- taking care of scaling and connectivity by offering virtual servers in their data centers, but it still took a nerd to configure and manage them.

Today, companies like Bitnami are raising the abstraction level and lowering the nerd bar, making it possible to deploy a server with installed applications in just a few minutes.

To demonstrate the ease of deploying applications, I created a virtual machine on the Amazon cloud and installed Web, wiki and blog servers.  You can visit the server and check the three applications here -- all three are fully operational.

I am not a system administrator or network engineer, but I was able to create the virtual server in the Amazon cloud and install and deploy the three applications in about ten minutes using Bitnami. (You can see the step-by-step installation here).

Bitnami and others like it are raising the abstraction level. Soon we may be able to describe a virtual machine -- its speed, memory and storage -- and deploy it and its applications using a form like the one shown here.

In addition to specifying the server and its applications, this hypothetical form allows one to select a cloud vendor.  Today, Bitnami is tied to Amazon's cloud, but one can easily imagine it offering a choice of cloud vendors.
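
Such a form is really just structured data plus a sanity check. Here is one way it might look -- the field names, vendor list and `validate()` helper are all invented for illustration, not Bitnami's or Amazon's actual interface:

```python
# A hypothetical deployment "form" expressed as data. Everything here
# is invented for illustration -- not any real Bitnami or Amazon API.
spec = {
    "vendor": "amazon",
    "cpu_ghz": 2.0,
    "memory_gb": 4,
    "storage_gb": 100,
    "applications": ["web", "wiki", "blog"],
}

def validate(spec, vendors=("amazon", "rackspace", "other")):
    """Check the form before handing it to a (hypothetical) deployer."""
    return (spec["vendor"] in vendors
            and spec["memory_gb"] > 0
            and len(spec["applications"]) > 0)

print(validate(spec))  # True
```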

If we were to dynamically allocate the resources needed to run an application -- changing them automatically when some performance threshold is crossed -- one could just pick a vendor, select the applications to deploy and click submit.
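
The threshold-crossing rule itself can be very simple. A minimal sketch, with thresholds I picked arbitrarily for illustration:

```python
def rescale(servers, cpu_percent, high=80, low=20):
    """Hypothetical threshold rule: add a server when average CPU load
    crosses the high mark, drop one when it falls below the low mark."""
    if cpu_percent > high:
        return servers + 1
    if cpu_percent < low and servers > 1:
        return servers - 1
    return servers

print(rescale(2, 95))  # 3 -- scale up
print(rescale(2, 10))  # 1 -- scale down
print(rescale(2, 50))  # 2 -- leave it alone
```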

When that happens, your grandmother can be her own system administrator.

The online education market is global

There has been a lot of talk about and investment in open online classes from elite US universities like MIT, Harvard and Stanford, but let's not lose sight of the fact that online education is a global market -- on both the supply and demand sides.

Excellent universities in nations other than the US are offering classes, certificates and degrees online. There are big names like the Indian Institutes of Technology, Cambridge and Oxford, and not-so-big names like the University of Namibia. Universities big and small in every language group are thinking about distance education today -- we can look forward to a lot of competition and choice.

The student population is also global. Stanford's AI course had students from 190 countries.  The class was also free, and the most exciting promise of open online education is that it can reach the disenfranchised.

One is reminded of the story of the young mathematician Srinivasa Ramanujan rising to fame after writing Professor G.H. Hardy at Cambridge from his village in Southern India. (His first two letters to Hardy are said to have been returned unopened). Tomorrow's Ramanujan will have a much easier time getting the attention of his tutors. How many Ramanujans will we find enrolled online and what will be their contribution to humanity?

I can't leave this post without pointing out the irony that Springer publishes a math journal named after Ramanujan. The print version of Ramanujan is $719 per year plus $67.50 shipping and handling and the electronic version is $590. Ramanujan could not have afforded it -- but the disruption of academic publishing is a different post.

Friday, May 18, 2012

Google goes beyond text search with their Knowledge Graph

Soon after he created the World Wide Web, Tim Berners-Lee turned his attention to the semantic Web -- a Web of data rather than documents. Google is now rolling out their first step in that direction, the Knowledge Graph.

Google's 2010 purchase of Metaweb, the company that built the Freebase database, was a key step toward the Knowledge Graph. Freebase is a semantic database, which knows the attributes of entities and the relationships between them. For example, Freebase knows that Larry Press is a person and that the value of his city of birth attribute is Pasadena, California (not Pasadena, Texas).

Google started with the Freebase concept and added data to create the Knowledge Graph database, which now contains 500 million entities with 3.5 billion attributes and connections.
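
To make the idea concrete, here is a toy sketch of such a database: entities have typed attributes whose values can themselves be other entities. The data comes from the post's own examples; the structure is my illustration, not Freebase's or Google's actual representation:

```python
# A toy entity-attribute database in the spirit of Freebase -- my own
# illustration, not the real data model.
graph = {
    "George Washington": {
        "type": "person",
        "place_of_death": "Mount Vernon",  # links to another entity
    },
    "Mount Vernon": {"type": "place"},
    "Larry Press": {
        "type": "person",
        "city_of_birth": "Pasadena, California",
    },
}

def attribute(entity, name):
    """Look up one attribute of one entity (None if unknown)."""
    return graph.get(entity, {}).get(name)

print(attribute("George Washington", "place_of_death"))  # Mount Vernon
```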

Let's look at an example. I started with a vanity search for myself, and the following profile was displayed on the right hand side of the screen:

Note that it did not know the value of any of my attributes, it just returned a link to my Google Plus profile and the first few sentences of my most recent posts. I guess I am not one of the 500 million entities included in Google's Knowledge Graph.

Next I searched for George Washington, who is a bit better known than me, and is included among Google's 500 million entities.

In this case, it knows his nicknames, date of birth, etc. Since he is not just a person, but a president, he also has a vice president attribute.  It also knows that he died at Mount Vernon, which is another entity that is included in Google's Knowledge Graph:

While the Knowledge Graph was developed using the Freebase tools, Google did not import the user-contributed Freebase data. (I am in Freebase, but not in the Knowledge Graph). That says Google is abandoning the Wikipedia-like openness of Freebase, in which users could add entities and change the values of their attributes, for a database that is curated in house. That will limit its growth and its "Internetness."

This is an interesting announcement, but Google is not the only player in the Web of data game.

Apple has attracted a lot of attention with Siri, a speech-driven application that answers questions by querying Wolfram Alpha, another semantic database system. Knowledge Graph gives Google an answer to Siri and Wolfram Alpha. (Wolfram Alpha goes further, incorporating a powerful symbolic math engine.)

Microsoft is also working on the semantically rich Web of data. They characterize Bing as an "answer engine" rather than a "search engine," and Microsoft Research has a Semantic Computing Initiative. Microsoft will no doubt incorporate that work into Bing.

The Web is getting smarter -- we may move from today's Web of documents to a Web of data and eventually a Web of knowledge (an ill-defined buzzword I've heard).  It makes you wonder what the Web will be like in fifty years.