Saturday, September 28, 2013

User interface design -- fad or function?

Microsoft started with the flat Metro interface of Windows 8. Apple flattened iOS 7 and now Google has flattened their logo.

Is this based on human-computer interface research or is it style and fad? Years ago, I taught an HCI course in which we read about controlled studies in making user interface design choices. Google is said to be engineering driven, researching everything from the optimal number of seats at cafeteria tables to the number of pixels devoted to a particular icon on the screen.

But, I have the feeling that the rush to simple, flat user interfaces is driven by fad as much as function. My wife uses an iPhone and an iPad. She is probably a typical non-geek user -- using relatively few apps and features. Our daughter was all excited when iOS 7 came out -- she grabbed my wife's iPad and iPhone and installed it the next day.

After using iOS 7 for a few days my wife says she sees no advantage for her usage mix -- the only change is that she has to learn how to do what she always did in slightly different ways. She also likes 3-D buttons that are obviously clickable and give feedback when you click on them.

My wife thinks it is fad, not function, and it reminds me of Pantone's annual fashion color palette and their color of the year:

If these changes were driven by functional considerations, I'd love to see the old-fashioned HCI research that led to them. If not, fads tend to be cyclical, so we can look forward to a return to 3-D skeuomorphism in a few years. As you see below, the designers are already working on them.

(You can make your own goofy logos at


Update 9/29/2013

This child was even more upset than my wife with iOS 7:

Friday, September 27, 2013

Purdue Signals -- giving students feedback on how they are doing

I give my students feedback as to how they are doing relative to the class on assignments and quizzes throughout the semester. I also do informal, anonymous surveys to give them feedback on their relative level of effort, for example asking how many actually watched a video that had been assigned or how long they spent studying a module or doing an assignment. These low-tech surveys take only a few minutes to administer and tabulate. My goal is to put information, and therefore responsibility for the outcome, in their hands.

Purdue University has a more ambitious effort for monitoring student progress and giving them feedback during the term. Since 2007, they have used Signals, a system that mines student data to predict their success in a course. They look at demographic, help-seeking and performance variables to judge how well a student is doing as the term progresses. Student feedback includes a red/green/yellow signal to indicate their overall progress as well as "canned" emails from their instructors and suggestions as to how to improve and where to get help.

Purdue reports that graduation rate, retention rate and grades have improved for students taking Signals courses. For example, Signals students got better grades in these courses:

This and the other results reported by the Signals team are encouraging, but there are many confounding variables. For example, professors choose whether or not to use Signals in their courses and those who choose to do so may be more committed to teaching than those who don't.

If this sounds interesting, you can visit the Signals home page, read this short report on the project, read this discussion of ethical considerations in mining student data in this manner or watch these one minute videos.

Explanation of Signals for students:

Comments from two faculty members:

I am not sure whether I would use Signals if it were available to me as a teacher, but my low-tech approach to feedback on relative performance and effort is simple and I can do it on my own.


Update 9/28/2013

Andrew Stewart suggests that we look at JISC's work in this area, writing:
Some interested findings emerging from Jisc's Assessment and Feedback programme around this area. As a distance learning student myself I'd love to have this kind of information at my fingertips.

Thursday, September 26, 2013

Chromecast -- the $35 decision support room in your office

In an earlier post, I discovered that my old laptop was too slow to cast video to my TV set, but it worked well when displaying still images like a PowerPoint presentation on a TV set equipped with Google Chromecast.

It not only works for a presentation by one person to an audience -- several people in a room can actively share a heads-up display. Augmented meeting rooms, in which participants have connected computers at their fingertips, were envisioned by Doug Engelbart, who invented much of what we use today during the 1960s.

In the 1980s, local area networks became common, and the University of Arizona, Xerox Palo Alto Research Center and others began experimenting with augmented meeting rooms with shared displays. Meeting participants could brainstorm ideas, rearrange document outlines, edit documents, vote and conduct polls, etc., and companies like Groupsystems marketed upscale decision support rooms like this one.

Those rooms cost a fortune and were used as corporate boardrooms, but interest in them has waned.

A large TV set with a Chromecast dongle is not as powerful or opulent as a decision support room, but for $35 it might become the decision support room for (small groups of) the rest of us. Below you see a simple test in which two users are editing the same Google Drive document with the result displayed on a TV set.

This sort of setup would allow several people in a room to share a heads-up display. I tested it with a simple Google doc, but one can imagine using it with the kinds of software found in expensive decision support rooms -- software with modules for voting, brainstorming, outlining, writing, etc. and features like selective anonymity and podium passing. We may find Chromecast displays next to those whiteboards in our offices in the future.

Update 10/2/2013

Commenter Martyn Williams noted that the Chromecast will work with an HDMI-equipped projector, which would enable larger displays in your office or conference room.

Commenter Roger Jennings suggested that the Chromecast could save bandwidth in casting pages if it had more "intelligence" -- for example being able to display PDF, Excel, Powerpoint, and Word files.

Together, these comments suggest a future where we have dumb displays with upgradeable, external intelligence. I have changed the TV set in my den 3 times in the last 35 years, but change computers every couple of years. I don't want a smart TV set, I want a dumb display with an upgradeable "chromecast."

Monday, September 23, 2013

Globalization of MOOCs: Futurelearn announces first courses

When you think of MOOCs, edX, Coursera and Udacity come to mind, but global online education is taking off.

Futurelearn, a coalition of 20 UK and international universities, the British Museum, British Library and British Council, has just launched with 20 courses starting this fall and winter. The courses are 6 or 7 weeks long and require 2 or 3 hours per week.

I checked out their "beta" Web site and a few things caught my attention.

Their slogan is "Learning for life," indicating a focus on students who are not seeking credit and degrees. That audience may turn out to be more important than traditional university students -- more lucrative and more beneficial to society.

They also show interest in training for job-related skills. One of the initial courses is Dental photography in practice.

Their tagline is Enjoy free online courses from leading UK and international universities, indicating a global focus. In addition to international universities, they will be serving international students. One of their partners is The British Council, the UK international cultural organization, which offers classes (online and off) and arranges cultural and educational exchanges and events. The British Council has offices in 116 nations and they will no doubt help with marketing and spreading the word.

Futurelearn is later to the game than the big three U. S. MOOC providers, but the game is just beginning -- the technology, pedagogy and place in society of online education are all changing rapidly. Furthermore, FutureLearn is a private company wholly owned by the Open University, which has been doing distance education (online and off) since 1971.

I've enrolled in a course and am anxious to see their platform and pedagogy. Stay tuned.

Beware of the Nexus 7 and the Hush-a-Phone -- they may damage your network

I heard a rant by Jeff Jarvis on the This Week in Google podcast. It seems that he got a new Nexus 7 tablet and Verizon refused to add it to his LTE account because it had not yet been verified. He tested it with a SIM from a different device and it worked fine. He also pointed out that Google had advertised that it would work on the Verizon network and that the terms of Verizon's FCC license required open access to any compliant device.

(He has documented the story in this blog post).

Verizon said they had to certify the device -- have it tested to be sure it would not harm their network.

That reminded me of the Hush-a-Phone. In 1956, the courts overruled an FCC ban on Hush-a-Phone, rejecting AT&T's claim that it posed a risk to the network and would degrade call quality.

Here is a picture of the Hush-a-Phone -- you can decide how grave the risk was:

What if AT&T had prevailed in the Hush-a-Phone case and the subsequent case of the Carterfone, a device for patching radio calls into the telephone network? (Yeah, hams used to do that).

It seems that Verizon is unclear on the meaning of "open" -- they are still nostalgic about the good old days, when only the phone company could sell you things like phones, modems, DSL routers, answering machines, etc.

Friday, September 20, 2013

If you build it, they will come

A lot of folks are saying that phones are so fast these days that the 64-bit processor in the iPhone 5s does not change the user experience -- it is nothing more than a marketing gimmick.


This reminds me of the time I was consulting to MicroPro International, publisher of WordStar -- the most popular word processing program of its time. I was fired as a consultant after telling them that the second-generation word processor they were developing, WordStar 2000, was a loser because it was not graphically oriented. They ignored me because, hey, a graphically oriented word processor would require 64 MB of memory and 16 MB was a lot of RAM at the time.

In a Moore's Law world, you design for the future, not the present. Apple's 64-bit CPU will not make placing phone calls or texting any faster, but it provides a new platform on which to build new applications. For example, we will see the substitution of computation for hardware in making better videos and photographs -- the 41 megapixel camera in the Lumia 1020 will not seem so amazing in a few years. Improved voice recognition and synthesis will demand more horsepower. Wearable things and physiological monitoring applications will be developed to use all the processing power we can muster.

And, how about something mundane like PC replacement?

My main computer is a Dell Precision M4400 laptop. It has a dual core 64 bit processor with 410 million transistors that is clocked at 3.06 GHz. It has 8GB of memory and a 256 GB solid state drive.

I do 99% of my work on that machine, and unless I am streaming or rendering video, it seems quite responsive. If the CPU were ten times as fast, I don't think I would notice much increase in my productivity.

The iPhone 5s has a dual core 64 bit processor with "over 1 billion" transistors that is clocked at 1.3GHz. It has 1 GB of RAM and up to 64GB of storage. It also has a motion sensing coprocessor.

Anandtech reports that the new iPhone is substantially faster than its quad core, faster-clocked cell phone competitors -- Apple is using those billion transistors well.

The iPhone is faster than other phones, but how would it compare to my laptop if it were plugged into a keyboard/monitor docking station?

Both have dual core, 64 bit CPUs and judging by the transistor counts and cell phone benchmarks, the iPhone processor should be able to beat my laptop. But, what about the low clock speed and relatively small memory? And two cores sounds kind of lame these days.

Apple has kept clock speed, memory capacity and core count low to save power, but, when docked, power is a minor consideration. I bet Apple's clever engineers could design a dual-mode machine that slowed and saved power when not docked. (Cooling would be a problem when docked and running fast).

Canonical is taking a shot at cell phone docking with Ubuntu for Android. It is vaporware for now, but they have a cute video to illustrate the concept:

Apple traditionally does a major iPhone upgrade every two years. This was the year for a minor upgrade, but it laid the foundation for the future. I do not know what applications will be developed to utilize that 64 bit address space and processing power, but ... build it and they will come.

Update 9/21/2013

Last week, I asked my class whether they thought the fingerprint reader on the new iPhone 5s was a big deal. They did not think so -- they said they would be willing to pay from 0 to about $10 for the added convenience. Then I asked them about using it as general ID for authentication and for purchases in stores and online. That perked their interest up. But, will that happen? Brian Roemmele thinks it will and builds a strong case in his post What is Apple’s new Secure Enclave and why is it important?. It turns out that some of those billion-plus transistors in the A7 CPU are devoted to the implementation of patent-protected security features.
If Roemmele is right, we will see a slew of authentication and transaction-oriented applications for the iPhone 5s and future devices using this technology. Here is the conclusion of his post:
Apple has taken a very slow and methodical approach with the release of Touch ID. We can see that there was a tremendous amount of amazing work that has gone into this project. All of this convergence took over seven years of very hard work. It includes many patent applications, the acquisition of AuthenTec, the selection of the A7 processor and the integration of the TrustZone suite all baked together into what we now know as Touch ID. This has been a long journey that has only just been made public and I am rather certain that Steve Jobs would be quite proud.

Update 10/22/2013

The new iPad and iPad mini are out and Gigaom's first look singles out the speed of 64-bit apps and predicts that other developers will follow suit. Here is a quote:

The native Apple apps open super quick and there’s no lag when scrolling or paging through content in Pages or iPhoto. You tap and the device responds. Obviously Moore’s Law is at work here, but it helps that Apple has rewritten its native apps for 64-bit compatibility to be fully optimized with the A7 chip. Developers will be doing the same over the next year, so the iPad Air is likely only going to get better until the next model arrives.
Apple built it and they are coming.

Thursday, September 19, 2013

Thirty eight percent in the US watch Netflix online -- each one knows how to "cut the cord"

Yesterday I posted a note on Netflix' vision for their company and the TV industry in general. They now see themselves as a "movie and TV series network" and predict rapid growth for Internet TV. Today, I came across a Nielsen survey that supports both of those contentions.

The survey showed that 38% of the people in the U. S. subscribe to or watch Netflix streaming video service. That is up from 31% last year.

I don't know about you, but that is a lot more than I would have guessed. It is about 119 million people if we consider the entire population -- babies and all. Note also that Hulu and Amazon also have significant, growing numbers of subscribers.
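The 119 million figure is easy to check as a back-of-envelope calculation, assuming a 2013 U.S. population of roughly 313 million (an approximation on my part):

```python
# Back-of-envelope check of the 38% figure against the whole
# U.S. population (~313 million in 2013 -- an approximation).
us_population = 313_000_000
netflix_share = 0.38

millions = round(us_population * netflix_share / 1e6)
print(millions)  # ≈ 119 (million people)
```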

Netflix' view of themselves as a series network is also confirmed by the study. Forty five percent of Netflix streaming subscribers say that what they stream is original programming -- series like "House of Cards."

And, when they watch those series, they tend to “binge.” Eighty eight percent of Netflix users and 70 percent of Hulu Plus users report streaming three or more episodes of the same TV show in one day. As we pointed out in our previous post, both consumers and creators like the full-season format of Netflix productions.

The survey also showed that Netflix and Hulu are watched on a variety of devices:

The above figure also suggests a trend away from computers and game machines toward phones and tablets. People want to watch TV on any device at any time and at any place.

"Over the top" Internet television is not just for geeks any more -- 38 million people understand how easy it is to defect from cable and satellite TV, to "cut the cord." As the quality and variety of Internet TV material improves, it will be easy for them to drop their cable and satellite subscriptions. When we reach the tipping point, the transition will be rapid.

Wednesday, September 18, 2013

Netflix' vision of their company and the future of television

Check out the Netflix Long Term View document for investors. It is interesting because it reveals their vision of the company and for the future of television in general.

As to the vision of the company, they say they are a "movie and TV series network." Note that they see themselves as a "network" -- will Netflix, YouTube, Amazon and the BBC become the new television networks?

But, they are not just a network, they are a "TV series network." That is a testament to their success producing multi-episode series for Internet distribution. People like to watch TV without commercials. They also like watching two, three or maybe all the episodes in one sitting. They like watching TV on phones, tablets, PCs or television sets whenever they want to.

Creators also like the artistic freedom and financial security of the Netflix format. They have the freedom to craft a long story. We can think of the Netflix series "House of Cards" as a 661 minute story, to be watched in one or several sittings. The 13 episodes vary from 46 to 56 minutes in length -- the writers are not constrained by time slots and commercial breaks. They could have created more or fewer episodes if that was the best way to tell the story.
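The numbers above are internally consistent, as a quick check shows:

```python
# Quick consistency check: do 13 episodes of 46-56 minutes
# plausibly sum to the 661 minutes quoted above?
episodes = 13
total_minutes = 661

avg = total_minutes / episodes
print(round(avg, 1))  # 50.8 -- comfortably within the 46-56 minute range
print(episodes * 46 <= total_minutes <= episodes * 56)  # True
```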

For more on a creator's view of Netflix, watch Kevin Spacey's excellent 2013 James MacTaggart Memorial Lecture (below). Spacey, who produced and starred in "House of Cards," says he pitched it to the TV networks and cable companies, but he went with Netflix because they gave him total creative control, funded 26 episodes up front and did not request a pilot episode. He feels that pilots are an expensive, misleading digression. He is producing very long stories with evolving characters, and trying to cram that into a single TV pilot or a two hour movie is impossible.

How about the Netflix vision of the future of television?

They sum it up saying that "While Internet TV is only a small percent of video viewing today, it will keep growing because:"
  • The Internet is getting faster, more reliable and more available;
  • Smart TV sales are increasing and eventually every TV will have Wifi and apps;
  • Smart TV adapters are getting better and cheaper;
  • Tablet and smartphone viewing is increasing;
  • Internet TV apps are improving through competition and frequent updates;
  • Streaming will be the leading source for 4k/UHD video;
  • Internet video advertising is becoming more personalized and relevant;
  • TV Everywhere provides a smooth economic transition for existing networks;
  • New entrants like Netflix are innovating rapidly and driving improvements.
Do you like "binging" on a commercial free TV series? (I just finished "Orange is the New Black"). Do you agree with Netflix' reasons for the future growth of IPTV? Are they enough to overcome the power of the incumbents?

Check out Kevin Spacey's lecture:

Reflection by entrepreneur Tim O'Reilly

This post is a bit unusual for me, but it fits because the Internet is the product of business and entrepreneurship as much or more than it is of research and education.

O'Reilly Media is one of the earliest Internet companies. O'Reilly started as a technical writer then began publishing books on the Unix operating system. By the time the Web appeared, the company was publishing Internet and programming books and they built the first commercial Web site, which was later sold to AOL. Today O'Reilly Media is a profitable group of enterprises with nearly 500 employees and collective revenues approaching $200 million.

Founder Tim O'Reilly has written a longish blog post in which he recounts the history of the company and talks about what he did right and what he did wrong. He modestly features six key mistakes and the lessons he learned from them, but there is a lot on what he did right as well.

Recommended reading for those interested in tech business and management.

Tuesday, September 17, 2013


If you do not agree that we are in the midst of an explosion in educational innovation, check this blog post by Daniel Hickey.

We all know that MOOC is a massive open online course, but do you know what a BOOC is? A DOCC? A SPOC?

The post contains links to examples of each. (Well, no links to SPOCs).


Update 9/17/2013

+Mark Vickers added OLAs to the acronym list, but OLAs -- OnLine Activities -- sound a lot like learning modules, which I and many others (most successfully the Khan Academy) have used for years.

Saturday, September 14, 2013

Experience casting tabs

The other day, I posted a note on my experience with the Chromecast radio. Today, I will talk about experience with video.

My Chromecast works well when showing video from a cast-enabled site like YouTube or casting a Chrome browser tab with still images in it, but poorly when casting a tab with streaming video. (Note that "to cast" is becoming a word, like "to google" or "podcast").

The first thing I tried was casting a tab in which I was surfing the Web and stepping through a Google Docs presentation. The TV image was shifted a little bit to the left, cutting off a few pixels; there was a noticeable, but sub-second, delay in updating the TV screen and data compression made small text difficult to read. The performance was imperfect, but generally satisfactory. (PDF documents do not work because they are handled outside the browser).

As you see below, the CPU load was seldom over 50% of capacity and usually less than 30%, but that was enough to turn the fan on. (My laptop was plugged in).

Next, I went to the CommonCraft Web site and watched a video that played in a 579 by 294 pixel window. I picked Commoncraft because they make simple teaching videos with little movement on the screen. As you see below, the CPU load increased, but I was able to watch an entire video, which started at the arrow point:

For my final test, I watched a video of the Daily Show in a 512 by 288 pixel window. This is faster-changing video than Commoncraft's. At first, the video played satisfactorily, but CPU use quickly moved up to the vicinity of 100% and the audio and video degraded severely. The audio went before the video, and the program was unwatchable.

It should be noted that I ran these tests using a three-year-old Dell Precision M4400 laptop with a 3.06 GHz core duo CPU with 8 GB of memory running 64-bit Windows, connected to an old 802.11a/g WiFi network. The results would have been better with a newer, faster laptop or an 802.11n network.

The bottom line is that, given my computer and network, tab-casting is fine for PowerPoint presentations and Web surfing, but it is not up to streaming video in a window, much less streaming full-screen video. That is the bad news. The good news is that this is Chromecast version 1.0. Redesign and Moore's Law will take care of the performance problems soon enough.

For more on variables that can affect tab-casting quality, see this Google help page.


Update 9/16/2013

+Zarthan South pointed out that my laptop lacks Google's minimum CPU for casting video tabs. With Windows 7, one should have a Core i3 or equivalent for standard quality video casting. My laptop is fast enough to cast stills like Web pages and slide presentations, which is quite useful. Note also that RAM utilization was around 3GB while running these tests.

Update 10/22/2013

K. J. Kim has found that switching to the developer channel on a Chromebook significantly improved video tab casting. That shows that Google is continuing to improve the Chromebook.

Wednesday, September 11, 2013

I love my Chromecast, but the radio is kind of lame.

I ordered a Chromecast right after they came out, but Amazon back ordered mine. It came a few days ago.

I plugged it in to my TV set, ran the setup and discovered that it could not connect to my home WiFi network. After fooling around with the base station position and antenna and using Google's short HDMI extender cable to move the Chromecast to the side a bit and change its orientation, I eventually established a weak (2 bar) connection.

I used Metageek's inSSIDer software running on my laptop to see the signal strength on the clearest 2.4 GHz channel available in my TV room and, as you see here, it varied from -50 to -60 dBm.

That is not a terrific signal, but I have a Roku box plugged into the same TV set, and neither it, my laptop, nor my wife's iPad has difficulty connecting to the WiFi network from that room.

I looked online for radio sensitivity specs, and discovered that the folks at Tom's Hardware have speculated that the Chromecast uses an AzureWave AW-NH387 with 802.11 b/g/n, Bluetooth and FM radios. (If that is the case, I wonder if Google has plans for the Bluetooth and FM radios).

They came to that conclusion based on this photograph from Google's FCC Chromecast application:

(If you want to have some geek fun and see how they tested the device, check Google's report to the FCC).

Google's tests show that the Chromecast and its power supply satisfy FCC noise requirements, but I am still in the dark on the sensitivity of the radio.

Is there some way to get finer grained feedback on connection strength than the ill-defined "number of bars?" It would be nice if there were better specs so we could compare devices and guess ahead of time whether they would work in our locations. For now, I can only conclude that the radios in my laptop, my wife's iPad and our Roku box are better than the Chromecast radio. (By the way, a couple years ago, I found that the radio in my wife's iPad was weaker than my laptop radio).

Lest I end on a sour note, you should know that I love the Chromecast and expect to find Chromecasts connected to and built into many future TVs and boxes like my Roku. Chromecast radios will improve with time and I will upgrade my lame WiFi network to 802.11n. I won't be surprised if one day we find all sorts of "Chromecasts" plugged into formerly dumb, disconnected things like heaters, air conditioners, washing machines, etc.


Update 9/12/2013

Google decided to place the Chromecast antenna inside the dongle case, which means it has to zig around chips and is constrained by the size of the case, so your WiFi reception will depend upon the location and orientation of the dongle itself. Luckily, I was able to get my Chromecast to connect to my home network by fiddling with its orientation and getting it away from the TV set using Google's short HDMI extender, but I was flying blind.

Anyone who has adjusted the rabbit ears on an analog TV set knows that reception varies with the orientation and position of the antenna -- you see the picture improve or get worse as you move the antenna. Unfortunately, the Chromecast gives you no feedback as you move it around.

If you succeed in connecting, Google displays the "strength" of the connection using "bars" like on your cell phone. But those bars are a coarse, undefined measure of signal strength and they are not shown while you are adjusting the position of the Chromecast.

Google should let you monitor connection strength as you move the Chromecast around. The familiar bars would not be good enough -- you need something more precise like, say, a two digit number or a graphic meter of some sort. The user would not have to know what the meter was displaying, just that it increased or decreased with signal strength.
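The two-digit number idea is simple to implement. Here is a sketch that maps an RSSI reading in dBm (like the -50 to -60 dBm readings from inSSIDer) to a 0-99 meter. The -90 and -30 dBm bounds are my assumptions about a typical usable WiFi range, not a Chromecast spec:

```python
# Sketch of the two-digit signal meter suggested above: map a WiFi
# RSSI reading in dBm onto a 0-99 scale. The floor/ceiling bounds
# are assumptions (a typical usable WiFi range), not a Chromecast spec.

def meter(rssi_dbm, floor=-90.0, ceiling=-30.0):
    """Linearly scale RSSI to 0-99, clamping readings outside the range."""
    frac = (rssi_dbm - floor) / (ceiling - floor)
    return round(99 * max(0.0, min(1.0, frac)))

print(meter(-55))  # a mid-range reading like mine -> 58
print(meter(-90))  # at the floor -> 0
```

As the text says, the user would not need to know what the number means, only whether it goes up or down as the dongle is moved.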

The dongle would need to be mounted on a swivel that mechanically constrained it once you found the optimal orientation.

Another alternative would be to use an external antenna, something like this:

It seems that Google tried to emulate Apple with the Chromecast. It comes in a really nice box and all the user has to do is plug and play -- let's not bother them with details like signal strength and adjusting the antenna position. That is great when it works, frustrating when it doesn't.

Google and edX combine their strengths to form

In previous posts, I've said universities and university systems (like mine) could host open source MOOC platforms from Google and MIT edX, allowing faculty and others to experiment with innovative educational technology -- to develop focused instructional modules to supplement their own courses or complete MOOCs.

Better yet, I've suggested that Google could offer a hosted service where individuals could do the same without support of their university -- a "YouTube" for MOOCs and modular teaching material.

It looks like we are moving in the direction of the second suggestion. Google and edX announced today that they will be collaborating on

The service is slated to be available in mid 2014, and it sounds as though a lot of the details (including revenue sources) are yet to be decided, but open software, open data and collaboration are clear values of this non-profit entity.

You can read more in a Google blog post and an MIT press release. Here are a couple of quotes from each along with some parenthesized comments:
Google blog post

We support the development of a diverse education ecosystem, as learning expands in the online world. Part of that means that educational institutions should easily be able to bring their content online and manage their relationships with their students. (I hope they support individual teachers, students and non-academics who want to develop teaching material in addition to supporting educational institutions).

Today, Google will begin working with edX as a contributor to the open source platform, Open edX. (It sounds as though the edX and Google platforms will be combined. Some time ago, Stanford also rolled their MOOC effort into edX. Perhaps competition from Udacity and Coursera has been a factor in driving this consolidation.)

edX press release:

In collaboration with Google, edX will build out and operate, a new site for non-xConsortium universities, institutions, businesses, governments and teachers to build and host their courses for a global audience. (This sounds like "EdX for the rest of us" -- those who cannot afford edX fees and are not at elite universities).

Google shares our mission to improve learning both on-campus and online. Working with Google's world-class engineers and technology will enable us to advance online, on-campus and blended learning experiences faster and more effectively than ever before ... This new site for online learning will provide a platform for colleges, universities, businesses and individuals around the world to produce high-quality online and blended courses. will be built on Google infrastructure. (It sounds like Google is bringing their Hangout, Live Stream, YouTube, Plus, etc. infrastructure to the party).

The devil is, no doubt, in the details, but this combination of MIT's educational expertise and reputation, Google's vast infrastructure and the lofty goals of both organizations might turn out to be revolutionary.


Update 9/18/2013

They announced that they would be offering certified multi-course sequences as well as single courses. Do you think that this sort of thing might become more important than a traditional college degree in the job market? If so, for what sorts of jobs?

While I am glad to see them move in this direction, I also hope they make room for creating and discovering sub-course modules on focused topics. (I am a long-time modular teaching material nut).

Update 11/4/2013

Stanford, which had committed to the edX platform some time ago, has announced additional support.
Update 12/1/2013

There is a discussion of this post on Slashdot.

Update 12/16/2013

EdX drops plans to match students with potential employers. In a failed trial -- they tried to match 868 high-performing students with job openings and none got a job -- they ran into competition from traditional headhunters and codified hiring criteria in HR departments. They are considering other revenue-generating options, like licensing courses to universities and other types of organizations (which sounds a bit like the shift toward vocational training at Udacity) and hosting and supporting their open-source course-delivery platform. If they were to pursue the latter option, how would that impact their collaboration with Google?

Monday, September 09, 2013

Reflections on teaching a freshman composition MOOC at Georgia Tech

Karen Head has written a series of blog posts on a freshman composition MOOC she and her Georgia Tech colleagues (a team of 19) taught using the Coursera platform with support from the Gates Foundation. (The course name, composition, reflects the legacy of the course catalog -- this was a course on written, visual and oral communication.)

Anyone thinking of teaching a MOOC (in any subject area) should read these columns. They are an open-minded presentation -- the good and the bad. One is left with the feeling that today's Coursera platform is not up to the task of teaching a MOOC on a topic that requires substantive, subjective feedback on student work. That's the bad news. The good news is that we are at the start of a period of innovation, and Dr. Head and her colleagues learned a lot from the experience that will improve their classroom teaching. She says she is glad she engaged in the process, stating that "It is important, I think, to be part of things rather than only yelling from the sidelines (no matter which side you support)."

The following are quotes from the blog posts, with a few parenthetical remarks I added. (I hope I found all the posts).

Here a MOOC, There a MOOC: But Will It Work for Freshman Composition?
January 24, 2013

I am no Luddite. However, I will admit to some reservations about whether a MOOC is the ideal platform for teaching writing. I have argued passionately for keeping composition classes small. Ultimately, I decided to pilot this MOOC because I am open to the possibilities, but I prefer to discover firsthand whether it works.

A representative from Coursera (the platform we must use) contacted recipients of the Gates MOOC grants asking all the recipients to form a collaborative led by a Coursera representative to discuss course design. While the explicit message was one of helpfulness, the implicit message felt intrusive and seemed more about Coursera’s desire to ensure a certain continuity of experience for its users. Since Coursera is a business, I can understand its desire for such consistency. However, ours is a nonprofit project. This creates an obvious tension. (Like the "suits" -- network executives -- who censor and tamper with creative decisions in a movie or television production.)

Of MOOCs and Mousetraps
February 21, 2013
From the beginning we have had logistical issues getting a large group together on a regular basis. After only three meetings, we decided to break into two main subgroups: one focusing on curricular decisions and the other on technical ones.

Collaboration is an important element, and since my last post, the instructional designers of three other MOOCs devoted to introductory composition have joined us to create a consortium to discuss best practices. Those MOOCs will also be offered this spring. Our discussions have highlighted our biggest challenge—finding an experienced MOOC instructional designer, or at least a platform specialist.

Sweating the Details of a MOOC in Progress
April 3, 2013
Our consortium’s members collectively decided to add intention statements to our syllabi, stating that our courses are not equivalent to a semester-long college-composition course. The main reason for that decision is not that we believe our courses have inferior content but that there is simply no way to adequately evaluate the writing of thousands of students—something we would need to be able to do to certify their work.

My first video, which advertises the course, took more than an hour to record. It will run approximately three minutes in edited form.

Massive Open Online Adventure -- Teaching a MOOC is not for the faint-hearted (or the untenured)
April 29, 2013
[machine-grading technologies] remain unable to provide substantive evaluation, and I recommend that those who want to learn more on the subject look into the extensive research done by Les Perelman at the Massachusetts Institute of Technology.

While it hasn't been smooth sailing, I still see this as an important adventure. I already see the potential for MOOCs to provide certain supplemental content for my traditional classes, freeing me to do more of the work that only I can do with students. This form of a hybrid classroom excites me very much.

Inside a MOOC in Progress
June 21, 2013
It is exciting to see students forming communities within the discussion forums, to help one another with questions about content or technology. Our more ambitious students have developed study guides. Some self-identified writing-and-communication instructors have formed their own forum, to consider how they can use our course to teach their own students.

The most rewarding aspect of the course is the weekly “Hangout” session, live-streamed using Google Hangouts On Air.

... students (with limited and expensive Internet access) have complained about not being able to complete in-video quizzes when they download the lecture videos. (Those of us with experience of the Internet in developing nations would have predicted this).

My limited ability to make key pedagogical choices is the most frustrating aspect of teaching a MOOC. Because of the way the Coursera platform is constructed, such wide-ranging decisions have been hard-coded into the software—decisions that seem to have no educational rationale and that thwart the intent of our course. (The restrictions she describes concern problems with peer review.)

Lessons Learned From a Freshman-Composition MOOC
September 6, 2013, 11:58 am
If we define success by the raw numbers, then I would probably say No, the course was not a success ... only 238 students received a completion certificate—meaning that they completed all assignments and received satisfactory scores.

... if we define success by lessons learned in designing and presenting the course, I would say Yes, it was a success. From a pedagogical perspective, nobody on our team will ever approach course design in the same way. We are especially interested in integrating new technologies into our traditional classes for a more hybrid approach.

With that said, I don’t think any of us (writing and communication instructors) would rush to teach another MOOC soon.

If we define success by a true and complete “open” course, I would say No, the course was not a success. I have major concerns about access and privacy in a MOOC format. In many situations, “free” simply isn’t free.

Our MOOC has ended, but a larger, more positive conversation is just beginning.

(Earlier posts about MOOCs).

Sunday, September 08, 2013

Nature study: Gaming improves multi-tasking skills in older subjects

As Socrates, Doug Engelbart and many others have noted, we shape our tools, then they shape us. Writing was a useful invention, but our memory has suffered from it. Calculators are handy tools, but my students are not so great at doing arithmetic in their heads.

Like any other medium, the Internet is changing our cognitive abilities. We remember fewer phone numbers now that we have smart phones and when online we read quickly, superficially and carelessly, focusing our attention on the upper left hand portion of the screen. Our attention spans have been reduced and there is evidence that multitasking is inefficient.

But, it is not all bad news. Anyone who has watched a child construct 3-D worlds in Minecraft knows that they are masters at storing mental models of complex structures and navigating through them. There have also been studies showing that video game players are faster at making some kinds of decisions than non-gamers. (I bet basketball point guards, baseball players and football quarterbacks are great video game players).

A team of researchers led by Adam Gazzaley at UC San Francisco has just published a study showing improved cognitive ability in older people after practicing with a specially designed video game called NeuroRacer.

Playing the game helped older people multitask by improving their working memory and sustained attention. As their skills increased, so did activity in the prefrontal cortex of the brain, which is associated with cognitive control, in a manner that correlated with improvements in sustained-attention tasks. Activity also increased in a neural network linking the prefrontal cortex with the back of the brain.

The study was published in Nature and you can hear a podcast on it here and see a video summary here.

Studies like this one strike me as rather "brittle" -- focused on narrow abilities from which it is hard to generalize -- but the research is just beginning.

Saturday, September 07, 2013

Surveying the (changing) experience of our students

I teach a class on digital literacy in the Internet era, and the backgrounds and digital experience of my students change every year. (See the Beloit College Mindset List for general changes in student backgrounds).

As such, I start the term with a student background survey, which serves three purposes: 1) to assign the students to study groups, 2) to let the students see how their backgrounds compare with those of others, and 3) to provide an example of an Internet-based service -- online survey processing -- that we can discuss in class.
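Turning survey results into study-group assignments is easy to automate. Here is a toy sketch of one approach (the student names and composite "experience scores" below are hypothetical, not from my actual survey): rank the students by score, then deal them round-robin into groups so that each group mixes experience levels.

```python
# Form balanced study groups from survey scores: sort students by a
# composite experience score, then deal them round-robin into groups
# so each group gets a mix of more- and less-experienced students.

def balanced_groups(scores: dict[str, int], n_groups: int) -> list[list[str]]:
    """Deal students, ranked by descending score, round-robin into n_groups."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    groups: list[list[str]] = [[] for _ in range(n_groups)]
    for i, student in enumerate(ranked):
        groups[i % n_groups].append(student)
    return groups

# Hypothetical survey tallies:
students = {"Ana": 9, "Ben": 7, "Cal": 6, "Dee": 4, "Eli": 3, "Fay": 1}
print(balanced_groups(students, 2))
# → [['Ana', 'Cal', 'Eli'], ['Ben', 'Dee', 'Fay']]
```

A real scoring function would combine several survey answers (apps used, work experience, etc.); the round-robin deal is just one simple way to spread experience evenly.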

You can see this term's survey results here. (I've omitted the answers to open-ended questions like what is your major (they vary considerably), what audio, video or image editor do you use (most answer "none" to all three) and what is your relevant work experience.)

I would appreciate suggestions for changes to my survey -- what else would be useful and informative to know?

Do you conduct a similar survey? If so, what do you ask?


Update 9/8/2013

Thirty-three freshmen, one sophomore, seven juniors and fourteen seniors completed the survey. Their majors were:

Tuesday, September 03, 2013

It's not easy to leave Godaddy

A few years ago, I started a blog for old athletes. It has been dormant for a while, but I hope to restart activity on it some day, so I have hung on to the domain name.

It was about to expire, and I decided to move it from Godaddy to another registrar.

It turns out that to do that, you have to turn off domain locking and jump through a few other hoops. I jumped through the hoops, but when I tried to unlock the domain, I got this dialog box on the Godaddy site:

No problem -- I just clicked the "off" radio button, but, as you see, the option to save that setting was disabled.
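One way to check whether a registrar has actually released the lock, without trusting its control panel: transfer locks show up in the public WHOIS record as the EPP status code `clientTransferProhibited`, and a transfer can only proceed once it is cleared. A minimal sketch (the WHOIS excerpts below are illustrative, not from a real lookup):

```python
# Check WHOIS output for the EPP transfer-lock status code.
# Registrars set "clientTransferProhibited" while domain locking is on;
# until it is cleared, the gaining registrar cannot complete a transfer.

def is_transfer_locked(whois_text: str) -> bool:
    """Return True if the WHOIS text shows a transfer lock."""
    return "clienttransferprohibited" in whois_text.lower()

# Illustrative WHOIS excerpts:
locked = "Domain Status: clientTransferProhibited"
unlocked = "Domain Status: ok"

print(is_transfer_locked(locked))    # → True
print(is_transfer_locked(unlocked))  # → False
```

In practice you would feed this the output of a `whois` lookup for the domain; if the status line still shows the lock after you have "unlocked" it in the registrar's interface, the setting did not take.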

It's not easy to get out of Godaddy's roach motel.