Friday, April 01, 2016

The Tesla Model 3 reminds me of the original Macintosh, but Elon Musk does not remind me of Steve Jobs.



The Mac and the Tesla Model 3 have a lot in common. The Model 3 was not the first electric car, or even Tesla's first, and the Macintosh was not the first computer with a graphical user interface (GUI), or even Apple's first, but both came out at just the right time.

Workstations, the Xerox Star and the Apple Lisa all had GUIs before the Mac, but they were too expensive and remained niche products. When the Mac came out, technology had just improved to the point where a consumer computer with a GUI could gain a foothold and catch on. The Mac, with its proprietary hardware and software, was the only GUI game in town for several years until technology improved to the point that commodity hardware could support a GUI and Microsoft brought out Windows 3 and 3.1.

Similarly, electric cars, including Tesla's, preceded the Model 3, but they were too expensive and inconvenient to grow beyond a niche market. Battery, material and other technologies have now improved to the point where the $35,000 Tesla will appeal to the mainstream. One does not have to care about the environment or global warming to like it. Its sensors, safety features, comfort, size, increased battery capacity (coupled with more charging stations and home chargers) and ability to be upgraded via software download will appeal to a wide market, including owners of gasoline-powered cars.

As it was for the Mac, the timing is right for the Model 3. It's not too soon and not too late, but just right, like Goldilocks.

As technology improved, personal computers with GUIs became ubiquitous. The transition away from gasoline will take longer than the transition from command lines to GUIs because of the longer replacement cycle for cars, but the tipping point has been reached.

There are also financial parallels between the two. Tesla bankrolled the Model 3 with sales of the Roadster and Models S and X, and Apple bankrolled the Mac with sales of the Apple II. Both companies were struggling when the new machines came out: Tesla's stock is now 60% above its February 12 price, and the Apple II was running out of gas when the Mac was delivered.

Both companies developed comprehensive proprietary designs. Apple built the hardware and software for the Mac and Tesla is making the car and the batteries.

But that is where the similarities end. Apple holds on to its hardware and software innovations, protecting them with patents and lawsuits. Not Tesla. On June 12, 2014, Tesla opened all 249 of its patents, saying it would not sue anyone who uses the technology in "good faith." As shown below, they took down the plaques on their "wall of patents," replacing them with an image and the slogan "OEMS all our patent are belong to you." (I think Yoda wrote that for them).

Tesla's "wall of patents" before and after (image source)

It seems that Elon Musk sees other car and battery manufacturers as collaborators in the effort to replace gasoline-powered cars rather than competitors.

Disclaimer -- I am kind of an Elon Musk fanboy and make my students watch these videos of Musk being interviewed by Sal Khan, announcing the formation of Tesla Energy and recruiting engineers for the SpaceX satellite Internet project.

-----
Update 4/7/2016

In a blog post entitled The Week that Electric Vehicles Went Mainstream, Tesla says they received 325,000 reservations for the Model 3 in the first week. They also say that translates into about $14.5 billion in sales if all the reserved cars are purchased.

The base price of the car is $35,000, but these figures average out to nearly $45,000 per car ($14.5 billion divided by 325,000 reservations is about $44,600). Elon Musk indicated that the base price includes all of the sensors and software, but Tesla clearly expects to sell relatively expensive options like a second motor for all-wheel drive and additional battery capacity for extended range.

Friday, March 11, 2016

The Khan Academy -- on the Internet or a LAN near you

The Khan Academy began when hedge fund analyst Sal Khan started posting short, conversational videos on YouTube to help his cousin with her math class. The videos went viral. Today there are 23 courses in math, 7 in science, 4 in economics and finance, 25 in the arts and humanities, 3 in computing and preparation for 8 tests like the SAT along with content from 25 high-profile partners.

The Khan Academy is a non-profit organization that promises to provide a world-class education that is "free for everyone forever," and their open source software is available on GitHub. Over 39 million "learners" have used the material and it is being translated into 40 languages.

As shown below, the courses are composed of fine-grained modules, each focused on a single concept and each including a test of mastery. The modules are arranged hierarchically, and a student has not completed the course until he or she has mastered every module -- they encourage experimentation and failure, but expect mastery. (Getting a C in a typical college course means the student understood only about half of the material and will do poorly in classes for which the course is a prerequisite -- an effect that compounds throughout college and into the workplace).

Portion of the beginning arithmetic course knowledge graph
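The mastery gate behind that knowledge graph is easy to sketch in code. Here is a toy model -- mine, not Khan Academy's implementation, and with made-up module names -- in which a module unlocks only after all of its prerequisites have been mastered:

```python
# Toy mastery-based knowledge graph. Each module lists the modules that
# must be mastered before it unlocks. (Illustrative only -- not Khan
# Academy's actual data model.)
prerequisites = {
    "counting": [],
    "addition": ["counting"],
    "subtraction": ["counting"],
    "multiplication": ["addition"],
    "division": ["multiplication", "subtraction"],
}

mastered = {"counting", "addition"}  # modules this student has passed

def available(prereqs, mastered):
    """Modules the student may attempt next: not yet mastered, but with
    every prerequisite already mastered."""
    return [m for m, reqs in prereqs.items()
            if m not in mastered and all(r in mastered for r in reqs)]

print(available(prerequisites, mastered))  # ['subtraction', 'multiplication']
```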

In addition to the teaching content, the Khan Academy software presents a "dashboard" that enables a teacher, parent or other "coach" to monitor the progress of a student or class. The red bar shown in the dashboard view below indicates that a student is stuck on a given concept. The teacher can then help him or her or, better yet, have a student who has already mastered the concept tutor the one who is stuck. (Research shows that the tutor will benefit as well as the tutee -- "to teach is to learn twice").

Dashboard with fine-grained progress reports

The dashboard enables a coach to adapt to the strengths and weaknesses of each student and spot learning gaps. They understand that students may be blocked by one simple concept, and sprint ahead once it is mastered. (I recall sitting in freshman calculus class, and being totally lost for half the term, until I figured out what the teacher meant when he said "is a function of" and the class snapped into focus).

Confusion on a single concept "blocked" this student.

The third major component facilitates community discussion among the students taking a given class, allowing for questions, answers, comments and tips & thanks.

Tracking student participation in the course community

But, what if you don't have Internet access?

Learning Equality grew out of a project at the University of California at San Diego to port the Khan Academy software to a local area network. Their version, KA-Lite, can be customized for an individual learner, classroom or school, and runs on Linux, Mac or Windows machines as small as a $35 Raspberry Pi.

KA-Lite is three years old and has been used in 160 nations by over 2 million learners, from above the Arctic Circle to the tip of Chile, and translations into 17 languages are under way. The following map shows the organizations that are deploying it and the installations.

There is an interactive version of this map online.
To learn more, visit their Web site and contribute to their Indiegogo campaign.

See this companion post on MIT's Open Courseware, which is also available off line.

For the history, pedagogical philosophy, accomplishments and future of the Khan Academy along with a video collage showing examples of their content, see this 20-minute talk by Sal Khan:


Wednesday, March 09, 2016

MIT Open Courseware -- on the Internet or a mirror site near you

The granddaddy of online education is 15 years old.

MIT's Open Courseware project (OCW) has been offering free, open courseware under a Creative Commons license for 15 years. About two thirds of MIT's tenure-track faculty have put material from over 2,300 courses online, and the courses are viewed by over 1.5 million unique visitors per month (monthly statistics here).

There are courses from 31 departments and it is not all engineering and science -- the schools of Management, Humanities, Arts, and Social Sciences and Architecture and Planning all have OCW courses.

The format varies from course to course, each offering at least one and perhaps all of the following: video/audio lectures, student work, lecture notes, assessments, online textbooks or interactive simulations.

OCW users are pleased -- 80% rate OCW's impact as extremely positive or positive, 96% of educators say the site has helped or will help improve their courses and 96% of visitors would recommend the site. (My guess is that these figures depend upon which courses the person has taken, since the quality and quantity of material varies from course to course).

The most appealing facet of OCW for me is their Mirror Site Program, which provides copies of their Web site to non-profit educational organizations that have significant challenges to Internet accessibility, inadequate Internet infrastructure, or prohibitive Internet costs.

A mirror site requires a computer with a terabyte of storage that is accessible to students and faculty in a lab or over a local area network or intranet. The courseware is regularly updated, so someone has to be available to run a download every week or so and to coordinate with OCW. MIT recommends an Internet connection of at least 1 Mbps for the updates. The initial install (about 600 gigabytes) is typically delivered on a portable hard drive supplied by MIT.
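A little arithmetic shows why the initial install ships on a hard drive rather than over the network (the weekly update size below is my guess, not an MIT figure):

```python
# Back-of-the-envelope transfer times at the recommended 1 Mbps link.
link_bps = 1_000_000          # 1 Mbps
initial_bytes = 600e9         # ~600 GB initial install

seconds = initial_bytes * 8 / link_bps
print(f"Initial install at 1 Mbps: {seconds / 86400:.0f} days")          # ~56 days

update_bytes = 500e6          # hypothetical 500 MB weekly update
print(f"Weekly update: {update_bytes * 8 / link_bps / 3600:.1f} hours")  # ~1.1 hours
```

Downloading the full archive would tie up the link for nearly two months, while a weekly update fits in an overnight transfer.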

They currently have 368 registered mirror sites around the world (about 80% in sub-Saharan Africa) and, while most of the material is in English, selected courses have been translated into at least ten languages. For example, there are 94 courses in Spanish.

Most courses are in English, but some have been translated.

Translation is less important for university-level courses than for primary or secondary school since university students can often read and speak English; however, MIT would be happy for others to contribute translations.

Don't forget that the OCW system and course content are under a Creative Commons license, and they encourage people to replicate the material. For example, several copies could be made available in labs run by different departments within a university and at many universities within a nation.

If you are fortunate enough to have Internet connectivity, you can browse the site and course material online. If not, consider setting up a mirror site -- contact Yvonne Ng at MIT. If you do, keep me in the loop and let me know if I can help.

See this companion post on the Khan Academy educational site, which is also available off line.

Tuesday, February 23, 2016

Apple versus DOJ

I am far from an expert on this case or security in general, but this feels increasingly like a political battle that goes beyond this phone and this case. It is a fight between the FBI, which would like Congress to pass a law ensuring "backdoor" access and decryption on all phones, and Apple and most other tech firms, which oppose such a law.

I believe both sides are sincere. The FBI believes they could better guard us against terrorists (and drug dealers and other criminals) if they could get a warrant to search any phone, as they can a car, home, etc. Apple believes that since the US is the current world leader in encryption technology, we are better off without such backdoors because the "keys" would be discovered by others and other governments, for example the Chinese, might press for backdoor access in selected cases. (Apple has also invested a lot in a pro-privacy marketing image).

There is no obvious correct answer and there will be unintended and unforeseen consequences regardless of the outcome. Only one thing is clear -- this is not a matter that should be decided by supporters of Donald Trump or Bernie Sanders or Hillary Clinton.

For more, see this article, which has links to statements by both sides and this Pew survey on public opinion:


Friday, February 12, 2016

Sci-Hub, a site with open and pirated scientific papers



Sci-Hub is a Russian site that seeks to remove barriers to science by providing access to pirated copies of scientific papers. It was established in 2011 by Russian neuroscientist Alexandra Elbakyan, who could not afford papers she needed for her research. She was sued by Elsevier, a science publisher, and enjoined to shut the site down, but she has refused to do so.

The site claims links to over 48 million journal articles, so I decided to try it out by searching for the title of an article I had just read: "A technological overview of the guifi.net community network." The paper was published by Elsevier and costs $35.95 to download if you are not from an organization with an Elsevier account.

My search generated an indirect referral to Google Scholar, which returned the following error message:


The request had timed out, probably due to latency in the Google search plus transit time to Russia. It returned an error message with a link (arrow) and the suggestion that I try again, so I did. This time, it returned a captcha screen:


After the captcha, it retrieved a PDF file with the full article as it had been published.

The site is inconsistent. I tried it for a couple of the articles I have published in the Communications of the Association for Computing Machinery (ACM), which are online behind a paywall. It found the Google Scholar references, but was not able to retrieve the articles.

However, it was able to retrieve some of my ACM articles that other people had managed to liberate and post on their own Web sites and it found drafts that are on my Web site. It must do a Google Web search as well as call on Google Scholar.

The best way to use Sci-Hub is to find the Digital Object Identifier (DOI) of the publication you are looking for before you go to Sci-Hub. The DOI is a standard, persistent identifier of a scientific paper or other digital object and, if you have it for the paper you are seeking, you can simply enter it into the search box on the Sci-Hub home page.

Many publishers and organizations assign DOIs to their material and you can often find them in databases like PubMed or on the Web sites of the publisher, like the ACM Digital Library.
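If you want to check a DOI (and see a paper's metadata) before pasting it into Sci-Hub, the public CrossRef API can help. Here is a minimal sketch; the DOI below is a placeholder, so substitute the one you are after:

```python
# Look up a paper's metadata by DOI using the public CrossRef REST API.
import json
import urllib.request

doi = "10.1234/example.doi"   # placeholder -- a real DOI goes here
url = f"https://api.crossref.org/works/{doi}"

with urllib.request.urlopen(url) as response:   # 404s if the DOI is bogus
    record = json.load(response)["message"]

print(record["title"][0])     # paper title
print(record["publisher"])    # e.g., "Elsevier BV"
```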

Sci-Hub openly violates copyright law, is slow and clumsy to use and the access is inconsistent, but what alternative does a researcher in a developing nation or at a relatively poor university or other organization in a developed nation have? There are a number of open scientific publication sites, and my guess is that they will prevail in the long run, leading publishers like ACM and Elsevier to change their business models. But that is just a guess.

One can also imagine a world in which copyright law makes fair use exceptions for scientific research, as opposed to entertainment. I don't feel guilty about pirating a scientific paper, but am happy to pay to see Star Wars.

Finally, after visiting this site, one cannot help thinking of the case of Aaron Swartz, who committed suicide as a result of prosecution for his attempt to free scientific literature.

Tuesday, January 19, 2016

Two cool podcast interviews on bit rot, an unsolved problem

We all need to be aware of the problem of bit rot in our work.

One of my favorite podcasts, OnTheMedia, produced a program called Digital Dark Age (52:14) last year. It consists of several segments on the problems of protecting and archiving the vast amounts of data we are generating. If you don't have time to listen to the entire podcast, at least check out two segments, interviews of Vint Cerf on information preservation problems (6:28) and Nick Goldman on using DNA as a storage medium (9:02).

Let's start with the Goldman interview. He notes that applications like video entertainment and scientific research are generating immense amounts of data, which is already overwhelming today's optical and magnetic storage media. Goldman is experimenting with DNA as a very high capacity, long lived data storage medium. How might that work?

A strand of DNA is made up of strings of four bases, abbreviated A, T, C and G, as shown here:

The DNA strand is like a twisted ladder where the "rungs" are either A-T or C-G bonds. (For more, see this animation).

Biologists have developed equipment for sequencing (reading) the bases that make up a strand of DNA and for synthesizing (writing) arbitrary strands. That makes it possible to store a copy of a binary file in synthesized DNA, for example by representing binary 0s with an A or C base and 1s with a T or G. A DNA sequencer can then convert the As, Ts, Cs and Gs back into 0s and 1s.
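Here is a toy version of that scheme in Python. It implements the simplified encoding described above, not Goldman's actual code, which is more elaborate (for example, it avoids long runs of the same base, which are error-prone to sequence):

```python
# Encode a bit string as DNA bases (0 -> A or C, 1 -> T or G) and back.
import random

def encode(bits):
    return "".join(random.choice("AC") if b == "0" else random.choice("TG")
                   for b in bits)

def decode(strand):
    return "".join("0" if base in "AC" else "1" for base in strand)

bits = "01101000"              # one byte: ASCII 'h'
strand = encode(bits)
print(strand)                  # e.g., 'CTGATCAC'
assert decode(strand) == bits  # round-trips back to the original bits
```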

Goldman thinks we will see expensive DNA storage devices in three or four years and that they will be cheap enough for consumer storage in 10-15 years. DNA stored in a cool, dry place will last hundreds of thousands of years, and "all the digital information in the whole world, everything that's connected to the Internet" will fit in "the back of a minivan." If Goldman falters, there are other DNA storage projects at Harvard (article, video) and Microsoft.

But even if DNA or some other storage technology gives us dense, cheap storage, there are other problems, as outlined by Vint Cerf.

Cerf talks about "bit rot." The simplest type of bit rot is media deterioration -- media becoming unreadable after 20 or 30 years. That can be overcome by periodically creating a new copy, but even that is not enough. Let me give you a personal example.

In 2004 my wife had a small hole in the atrial wall of her heart repaired. In a 30-minute outpatient procedure, a skilled surgeon installed a small device whose tiny umbrellas were clamped over the hole and held in place by a spring.

A balloon determines the hole diameter (left); the device in place (right)

That was pretty amazing, so I asked for a video of the procedure, which was provided on an optical CD. The CD also included a program for viewing the video, the Ecompass CD Viewer. The CD media might be somewhat deteriorated by now, but I have transferred the program and data to magnetic storage, so I can still view it on my laptop.

But, my laptop is running Windows 7. The version of Ecompass CD Viewer I have was written in the Windows XP days. It still works, but will it be compatible with Windows 10 or later? If not, I could upgrade to a current version, but, as far as Google Search knows, Ecompass CD Viewer is no longer with us -- perhaps the company went bankrupt or dropped the program.

Ecompass CD Viewer works by stringing together clips stored in a video file format called SSM. If I could find an SSM viewer, maybe I could cobble together the entire video since I have the SSM clips. That might work for these static files, but we need the functionality of the original program for something a user can interact with, like a spreadsheet. Looking further into the future, Windows itself will disappear, along with Intel-processor machines.

Cerf does not see a complete solution to the bit rot problem, but he points to Project OLIVE at Carnegie Mellon University as a significant step in the right direction. OLIVE emulates old computers running old software on virtual machines. Below is an image from a demonstration of an emulated 1991 Macintosh running System 7 and HyperCard. The display, mouse and keyboard of the Dell PC are interacting with the emulated Macintosh, which is running on an Internet server.

Still image from a Project OLIVE demo video.

The OLIVE demonstration is impressive, but it is a research prototype that assumes standard input/output devices and capturing the vast number of programs and hardware configurations that exist today would require a massive effort. Similarly, DNA storage is at the early proof-of-concept stage. Both feel like longshots to me, so, for now, we all need to be aware of the problem of bit rot in our work.

Monday, December 21, 2015

Yahoo relies on annoying ads and specialized Websites

Will Internet news go the way of newspapers?

December 17 was the anniversary of President Obama's call for change in our Cuba policy. That milestone sparked a lot of coverage in the mainstream press, which I discussed in a post on my blog on the Cuban Internet.

The most extensive Cuba coverage I saw was a week-long series of posts on Yahoo -- U. S. and Cuba, One Year Later. The series has many well written posts on various aspects of Cuban culture and the political situation. Most are human interest stories on tourism, fashion, baseball, etc., but several were Internet-related.

The posts are not detailed or technical, but they are well written for a general audience -- like newspaper readers. (Remember newspapers?)

That is the good news.

The bad news is that the posts are overrun by annoying ads and auto-play videos. This is illustrated by the series "home page," shown below.

The series table of contents -- can you spot the ads?

Most of the elements in this array of phone-sized "cards" consist of an image from, and a link to, a story on Cuba but, if you look carefully, you will see that several of them link to sneaky ads. It's like Where's Waldo.

In an earlier post, I suggested that advertising-based, algorithm-driven Internet news might be increasingly redundant and concentrated in high-volume sites. The advertising revenue is used to pay talented writers, photographers and videographers who are capable of producing timely news coverage of a story like this one.

But, people are fed up with those ads and increasingly deploying ad blockers.

Reasons people turn to ad blockers

People are circulating manifestos and Google and Facebook are proposing standards to improve the advertising and speed of the Web, but will those moves cut Yahoo's revenue?

Furthermore, mainstream media like Yahoo rely to some extent on specialized, "long tail" sources, like my Cuban Internet blog. Yahoo interviewed me (and many others) and took material from some of my posts in preparing their coverage. But, will specialized blogs and sites continue to exist? Losing focused sources would increase the cost of mainstream media stories.

I really liked Yahoo's coverage of Cuba one year later, but I wonder if they will be around for Cuba five years later.

-----
Update 12/27/2015

ASUS will include the Adblock Plus ad blocker in their proprietary browser. Apple is now allowing ad blockers on iOS devices. Will Firefox be next? Microsoft Edge?

Thursday, December 10, 2015

Google Fiber considering Los Angeles and Chicago


Will Google free me from the evil clutches of the dreaded Time Warner Cable?

Google's first foray into municipal networking was connecting 12 square miles of Mountain View, California in 2007. In 2010 they issued a call for proposals from cities wishing to participate in an "experiment" called Google Fiber, which would offer customers symmetric, 1 Gbps connectivity. In 2012, Kansas City was selected as the first Google Fiber city.

But, was it an experiment? An attempt to goad ISPs to upgrade their networks? The start of a new Google business? In 2013, Milo Medin, who was heading the Google Fiber project, said that they intended to make money from Google Fiber and that it was a "great business to be in."

Today, Google Fiber is operating in three cities and they are committed to installing it in six others. Eleven cities, including Los Angeles and Chicago, have been invited to apply.

Google is considering big cities Los Angeles and Chicago.

Los Angeles and Chicago were just added, and it is significant that they are the first very large cities -- in both population and area -- on the list.

Since the initial installation in Kansas City, Google has codified the city-selection process in an informative checklist document. Google knows it is offering a service that will benefit the city in many ways, so the checklist is essentially a guide to an application in which the city offers access to poles and tunnels, 2,000-square-foot parcels for equipment "huts," fast-track permitting, etc.

I expect that Google also has its eye on the Los Angeles tech startup community and entertainment industries. While Google Fiber does not seem to be a mere "experiment," it will doubtless enable and discover new applications that capitalize upon gigabit connectivity (and increase Google ad revenue).

Rollout order within a selected city is governed by the willingness of residents of a neighborhood to sign up for the service. High demand areas get high priority. But, this can exacerbate the digital divide within the city -- serving wealthy areas before poor areas. Google encountered this problem in Kansas City. As shown below, wealthy neighborhoods (green) committed before the poorer areas, so Google initiated programs to reach out to them.

Wealthy KC neighborhoods committed early.

Based on that experience, they now consider inclusion plans in the application process and hire city-impact managers for fiber cities. They also offer very low-cost copper connections for those who cannot afford fiber.

I am not familiar with the situation in Chicago, but Los Angeles has been pursuing fiber connectivity for some time. The city issued a request for proposals for city-wide fiber two years ago, and last year CityLinkLA was formed with the goal of providing "basic access to all for free or at a very low cost and gigabit (1 Gbps) or higher speed access at competitive rates." The effort has been led by Los Angeles Mayor Eric Garcetti and Councilman Bob Blumenfield and they are working with both Google and AT&T toward that goal.

I assume that AT&T will achieve faster speeds over the copper running from fiber nodes to individual premises (AT&T's plant is telephone wire, so the relevant technologies are VDSL2 or G.fast rather than cable's DOCSIS), but they only serve a portion of Los Angeles. Other areas may have to wait for Google. Verizon seems to have given up on expanding their fiber offering, FiOS, some time ago.

Now for the belated full-disclosure. I live in Los Angeles, and am hoping that competition between Google or AT&T or someone will one day free me from the evil clutches of my current monopoly broadband service provider, the dreaded Time Warner Cable.

Friday, December 04, 2015

The Los Angeles startup scene – looking back

The recent LA Startup Week reminded me of the early days of personal computing and of the meetup that may have initiated the LA startup community.

When personal computers were getting off the ground in the 1970s, hobbyists and budding entrepreneurs met at places like the Homebrew Computer Club in Palo Alto or the Southern California Computer Society in Los Angeles to talk about what they were doing and to help each other out. There was idealistic, shared enthusiasm – we knew PCs were totally cool and were a "big deal."

The line between competition and cooperation among entrepreneurs was fuzzy, as was the line between entrepreneurs and hobbyists. Competitors met for beers after meetings to talk about the boards they were designing, and I recall Steve Wozniak offering copies of the schematic and parts list for his wire-wrapped computer at the Homebrew club while Steve Jobs stood behind a card table offering to sell them assembled.

The early Internet days felt the same – the Net was clearly revolutionary and it was born in an academic community with a culture that valued sharing and collaboration. The National Science Foundation backbone network (NSFNET) was established "to support open research and education in and among US research and instructional institutions" – for-profit traffic was not acceptable. (NSF also established an International Connections Program, which subsidized NSFNET links from academic and research networks in other nations).

The LA startup scene

In April, 2012, I attended the first meet-up of Tech Entrepreneurs of Venice, CA. It felt like the early PC days. There were no business suits or fancy offices. As you see below, we met in a warehouse (thanks to MovieClips), and stood around drinking beer and talking. But, the attendees were not hobbyists and the focus was on business opportunities, not research and education.

Venice is a small part of Los Angeles, and might have seemed an unlikely place for the birth of the next "Silicon X," but it has a history of avant-garde community and culture.

Venice was the heart of the LA beatnik community in the 1950s and the counterculture of the 1960s. The Peace and Freedom Party was founded there, and it was the center of the LA art scene. The design studio of Charles and Ray Eames was also in Venice. The Eames were design rock stars to whom IBM turned for their World's Fair pavilion and many IT and education exhibits. They even designed the look of IBM mainframes -- the Eames put the blue in "Big Blue."

I don't know if that meetup marked the founding of "Silicon Beach" (a name also claimed by Miami), but it felt like it.

Homebrew Computer Club meeting in Palo Alto

SCCS Interface, published by the Southern California Computer Society

First Venice tech entrepreneurs meetup

This building housed the Eames studio

Inside the Eames studio

Los Angeles startup week

I recently attended several events during LA Startup Week and, even though I live in Los Angeles, I was surprised at the scope of the LA tech community. RepresentLA maintains a multi-layer, interactive map showing that greater Los Angeles is (currently) home to 1,099 self-defined Internet startups along with accelerators, incubators, co-working spaces, investors, consultants and hacker spaces that support them.

Over 1,000 startups and organizations that support them

The startup week featured dozens of sessions with how-to talks, demos, pitches by organizations that support startups, etc. Kevin Winston of Digital LA kicked the week off with an overview presentation on the startups, the organizations that support them with capital and services and companies like his that support the community. He said only New York City and Silicon Valley have more startup activity than LA and he made the point by listing examples of successful companies, big buyout deals and the numbers of startups and supporting organizations.

Kevin Winston presenting an overview of the startup ecosystem.

Winston’s company, Digital LA, organizes events and tracks the Los Angeles startup scene – kind of a digital chamber of commerce.

Represent LA lists 30 accelerators, 44 incubators and 52 co-working spaces. The difference between accelerators and incubators can blur, but both are intended to help startups get off the ground. Accelerators provide space, mentoring and capital to startups in return for equity in the company. They typically admit a cohort of companies for a limited time – perhaps ten companies for four months – then admit a new cohort.

Incubators are similar to accelerators, but companies may come in at an earlier stage – accelerators often want a company that has a product and is generating revenue – and they are more flexible on the time the company stays in the incubator.

Note that some of these are focused on a specific industry – for example the LA Dodgers' accelerator focuses on sports related applications.

The services provided by co-working spaces also overlap with those provided by accelerators and incubators. In a co-working space, you pay rent for the facilities you need and hopefully meet people with common interests and complementary skills.

Some of the startup week events were held at the Cross Campus co-working space, shown below. Cross Campus will rent startups everything from a virtual office – just a mailing address and the right to schedule meetings or demos – to a suite of ten or more offices. A full-time dedicated desk rents for $500 per month.

The Cross Campus co-working space

The open space and the list of amenities convey the startup culture and values – here are some of the amenities: seven-times-filtered Flowater, free craft beer, standing desks, only low-VOC cleaning products used, a weekly on-site massage therapist and, of course, fast Internet connectivity. (I'm a misfit – I had to Google "low-VOC cleaning products").

My guess is that the most valuable thing one gets out of participation in a co-working space, accelerator or incubator is meeting people with complementary interests, values and skills.

Los Angeles also has willing investors. The Represent LA site lists 72 angel and venture capital investors and Winston gave examples of a number of high-valuation investments during the last year.

There are also 17 hacker spaces open to hardware startups and 287 consultants offering technical and business assistance. (Many of the latter also offer their services as accelerator and incubator mentors).

If you would like to keep up with the happenings in Los Angeles, subscribe to the newsletters and event listings at Digital LA and the organizations that hosted the week's events: Cross Campus, General Assembly and Expert Dojo.

I was surprised by the number of startups and the organizations that support and finance them in Los Angeles. The emphasis was on making money from startups – more MBA than hacker. The business emphasis reminded me of athletes who dream of playing professionally in the big leagues – I wonder how many of the startups will be around in a year.

The week also reminded me of the early PC days in Los Angeles and Silicon Valley and of a meet-up in Venice, CA that may have marked the beginning of the LA tech community, so I wrote a companion post on those times.

-----
Update 12/11/2015

Mayor Eric Garcetti launched an initiative aimed at upgrading the Los Angeles Internet infrastructure soon after his election and it may pay off. AT&T has announced vague plans for fiber rollout and, more interesting, Google is considering Los Angeles for their 1 Gbps Google Fiber.

Google Fiber is an important asset, so Google can negotiate for concessions, but the city has some negotiating power too. One of Google's goals is to spur the development of unique applications that require high-speed connectivity, which the Los Angeles tech startup community and entertainment industries may very well invent. For more on Google Fiber and their consideration of Los Angeles, see this post.

Thursday, November 12, 2015

Is .CO the new .COM?

Can .co do for Colombia what .tv has done for Tuvalu?

I just checked the cost of the domain name Larrypress.com. Name.com will sell it for $799 and Godaddy.com lists it for a bargain $695 -- too much. The country-code top-level domain for Colombia is .co, so I checked that out and found a Miami-based registrar, Pop.co, offering the first year free -- I snapped up Larrypress.co.

But it wasn't really free. The name is free for the first year, but managing the DNS costs $2 per month, so the real cost is $24 per year. That marketing gimmick seemed cheesy, so I checked with Hover.com, a registrar I've dealt with in the past. They charge $24.60 for the first .co year, and contracts for two to five years are $51.20, $77.80, $104.00 and $131.00. (I wonder what the algorithm is for calculating those small, uneven increases; see below).
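For the curious, the arithmetic on those increments:

```python
# Year-over-year increments in Hover's multi-year .co prices.
prices = [24.60, 51.20, 77.80, 104.00, 131.00]   # 1- through 5-year totals
increments = [round(b - a, 2) for a, b in zip(prices, prices[1:])]
print(increments)   # [26.6, 26.6, 26.2, 27.0] -- no obvious formula
```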

So, if you are willing to settle for .co rather than .com, you can save quite a bit.

Can .co do for Colombia what .tv has done for Tuvalu?

The island nation of Tuvalu has profited from its country-code domain name, .tv. In 1999, the Tuvalu government licensed .tv for $1 million per quarter, with a $50 million cap over 12.5 years. They also retained 20% equity in the licensing company. Subsequently, Verisign took over and, in 2012, renewed its contract to manage the .tv registry until December 31, 2021. The terms of the current agreement with Verisign were not announced.

There is money to be made on .co as well. In March 2014, Neustar paid $109 million for .CO Internet and they are trying to go beyond name registration to form a community of .co domain owners -- featuring promotional videos of .co company founders on their Web site and offering coupons for discounts on products, services, conferences, etc. They also promote the .co brand -- running ads and lining up large companies for one-letter domain names. For example, Twitter uses t.co for URL shortening, so millions of people see it every day.

They also pay the Colombian government a fee. Colombia has had a difficult time the last 25-30 years with revolutionary guerrilla armies, government death squads, drug cartels, murder and kidnapping, but the Economist says Colombia is close to a historic peace agreement that will transform its prospects. Perhaps popularizing the .co domain name is a small part of that transformation.

----
Update 11/12/2015

Here is a short video promoting the notion of a community of .co companies:

Wednesday, October 07, 2015

Communicating emotion and presence -- from a Hole in Space to Cuba's public-access hotspots

How about a Hole in Space between Havana and Miami? Between Jerusalem and Gaza City?


Cuba, one of the least connected nations in the world, has recently created 35 public-access, WiFi hotspots around the island. While 35 hotspots is a drop in the bucket, this opening is a start and it has been noted in many articles and blog posts.

Miami Herald photo
Most of the coverage of the new hotspots has been lackluster and redundant, but an article in yesterday's Miami Herald stands out because it stresses the human and emotional impact of these access points. The article describes people showing a new baby, a woman talking with her husband in Miami or a little boy telling his father he loves him. Baruch College Professor and Cuba scholar Ted Henken is quoted in the article as saying:
Cubans are living out some of their most personal moments — family reunions and introductions to new babies and spouses — not in the intimacy of their own homes but in public plazas and parks.
Raúl can relax -- the people are using the Internet to communicate with loved ones, not to organize political rallies.

This reminds me of an often overlooked, pioneering project. In 1980, artists Kit Galloway and Sherrie Rabinowitz created a "Hole in Space" by connecting larger-than-life displays in New York and Los Angeles with a satellite feed. It was the mother of all video chats, demonstrating that electronic communication could convey presence and emotion.

Check out the following five-minute excerpt from a video documenting the event:



If you liked that, watch the full half-hour video:



Hole in Space was created more than a decade before we saw the first, simple version of the World Wide Web, and Rabinowitz and Galloway were artists, not computer scientists. Products begin with a vision. In this case, Rabinowitz and Galloway had the vision and built the engineering prototype demonstrating its value. As the saying goes, "demo or die."

Hole in Space was only one of their projects. For an overview of a quarter century of Rabinowitz and Galloway's work, see the Electronic Cafe International archival Web site.

Today, video conferencing is ubiquitous -- it has even reached Cuba -- but our video chats are on small screens. I'd love to see "Hole in Space, 2015," using today's technology. Large, public advertising displays are common and they can be linked over the Internet. Wouldn't it be cool to punch a lot of holes in space?

Where would you put the displays? For a start, how about one between Gaza City and Jerusalem or between Havana and Miami? Kickstarter anyone?

Thursday, October 01, 2015

Current event presentations

I have been experimenting with supplementary presentations in my undergraduate Internet literacy course this term. The presentations are generally triggered by a current event that is relevant to our class. They consist of a handful of annotated slides, most of which have an image and just a few words and questions, so I expect the students to study the accompanying notes after I present them in class.

I've presented these so far this semester:

Feel free to use them and suggest others.

Here are a couple of random slides to illustrate the presentation style:




Monday, September 14, 2015

High school kids are taking more online classes

The UCLA Higher Education Research Institute conducts annual surveys of incoming college freshmen. There are many interesting questions, but since I teach an undergraduate course on Internet literacy, and education applications and services are increasingly important, I focus on two questions in particular:

  1. Have you used an online instructional website (e.g., Khan Academy, Coursera) as assigned for a class?
  2. Have you used an online instructional website (e.g., Khan Academy, Coursera) to learn something on your own?
The following table shows the percent of incoming freshmen who answered frequently or occasionally:


Three things strike me in looking over these results:
  1. Students are not waiting for their schools -- they are taking online classes on their own.
  2. Students attending historically black colleges are doing more online study than others.
  3. Online classes are growing in popularity among students working on their own and as assigned work.
How would you explain these observations? Do you expect them to continue?
(I also wrote a post after last year's freshman survey).


My grandson took a break from Minecraft to complete Khan Academy algebra 2 before starting high school.

Wednesday, September 02, 2015

Is Internet news inevitably concentrated and redundant?

I applied to register my blog on the Internet in Cuba with Google News, but was immediately (automatically?) rejected. My blog had 16,265 views last month and a friend told me a site had to have at least 150,000 views per month to be considered for Google News.

I don't know if that is the case, but I frequently post news before it turns up in Google News or as a Google Alert. (The two systems are separate).

Here's an example:

On August 27th, at 2 AM PDT, Bloomberg News published an article entitled "Cuba's Internet Dilemma: How to Emerge From the Web's Stone Age." by Indira Lakshmanan.

The article included a photo, a graph and two ads for IBM. In spite of being short, it had six bold-face section headings. I did not post anything about the article because it offered no news and had a significant misconception.

Later that day, I received a Google Alert for an article with the same title on the SunHerald site. The SunHerald version had a different photo and had cut the graph and sub-heads, but the body text was identical to the Bloomberg version. The byline credited "INDIRA A.R. LAKSHMANAN of Bloomberg News." Notice that she now had two middle initials and her name was all caps.

There was another credit at the end of the article: "Brian Womack contributed from San Francisco."

Brian did not change the body text, but he dropped the section headings and graph, changed the photo and gave Indira two middle initials and capitalized her name. Did his "contribution" take more than ten minutes?

The SunHerald version was preceded by a full-screen ad that I had to click to remove and, when the article was displayed, it had ads for liposuction, a bail bond service, a microwave oven (I had purchased one from Amazon earlier in the week), a smiling, blond investment adviser and two ads for the SunHerald publisher.

Well, this got my curiosity up, so I picked a random sentence from the middle of the article: "Last month, the state telecom monopoly ETECSA created 35 broadband Wi-Fi hotspots across the island, where the public can surf the Web, as Hernandez does" and Googled it. It turned up 18 full-text copies of the post.

Searching on the first sentence of the article: "Julio Hernandez is a telecommunications engineer, but like almost anyone else in Cuba who wants to get on the Internet, to do so he must crouch on a dusty street corner with his laptop, inhaling car exhaust and enduring sweltering heat", turned up many more hits, but a lot of those were snippets with links to a full-text version.

I searched Google Alerts on the key phrase "Cuba Internet" and turned up links to six full-text copies of the story: 1, 2, 3, 4, 5 and 6

I sent email queries asking about the criteria for inclusion in Google News to press@google.com and Stacie Chan, Media Outreach Manager, Google News, but neither replied. I also emailed the Bloomberg press office asking if they had licensed the article to the SunHerald and others, but received no reply.

This experience leads me to think that:
  • Google's News and Alerts algorithms find stories on a topic, not necessarily news or novel analysis.
  • Google's News and Alerts algorithms fail to detect redundancy, which may be intentional because it increases their ad revenue. (Might they discriminate in favor of sites with ads?)
  • Google's algorithms seem to pay attention to sub-headings and images, but not body text, in screening for redundancy.
  • Snippet posts often link to derivative copies rather than the original post (by Bloomberg in this case).
  • Relatively small, focused, long-tail blogs and news sites are not likely to be seen by Google News or Alerts.
Is it inevitable that advertising-based, algorithm-driven Internet news will be redundant and increasingly concentrated in high-volume sites?

I hope not. Perhaps Google (or Facebook) will be clever enough to automate discovery of worthwhile long-tail news sites or use human curators to find them.
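Detecting recycled body text, at least, is technically easy. Here is a minimal sketch -- my illustration, certainly not Google's method -- that flags near-duplicates using word-shingle overlap (Jaccard similarity), fed with the ETECSA sentence quoted above:

```python
# Flag near-duplicate articles by comparing sets of k-word "shingles".

def shingles(text, k=5):
    """Return the set of all k-word windows in the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

original = ("Last month, the state telecom monopoly ETECSA created 35 "
            "broadband Wi-Fi hotspots across the island")
recycled = original + " where the public can surf the Web"

score = jaccard(shingles(original), shingles(recycled))
print(f"{score:.2f}")   # a high score flags the recycled copy
```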

Disclosure: I have given permission for copies of blog posts to be posted by others (at no cost).

-----
Update 9/5/2015

As mentioned above, I sent email queries to Google when I got the idea for this post, but did not hear from them before I published it. This morning I got an email from Stacie Chan. Here is what she said:
There aren't any minimum number of clicks needed to get accepted as a News site into Google News. We do, however, have strict quality and technical guidelines that sites must follow to get and accepted and maintain their status in Google News. We accept smaller blogs/sites as well as larger ones.

Google Alerts, as you mentioned, is a separate product from Google News. But many of their sheets are triggered by a new article from publishers on our database.

Hope that helps!
Stacie
(I corrected what appeared to be two cell-phone typos).

-----
Update 10/3/2015

I follow the keyword "ETECSA" on Google Alerts and Google recently alerted me to a post on Cuban plans for home Internet connectivity at Frogoff.com. (ETECSA is Cuba's state-run Internet service provider).

The post was an identical copy -- text, title and images -- to my earlier post.

I don't really care that the slimeballs running Frogoff.com copied my post, but I do care that Google Alerts linked to it, not mine. Google will not tell me how they decide which sites to include in News and Alerts, but, whatever it is, it rewards parasites like Frogoff.com and overlooks relatively small, specialized long-tail sites that cover a given topic. (In this case, the state of the Cuban Internet).

-----
Update 10/12/2015

I received the following Google Alert yesterday:


ETECSA is Cuba's government monopoly telecommunication company. The message alerts me to the fact that ETECSA upgraded their cell phone network in 1999 -- not exactly "news."

The alert links to ETECSA's Wikipedia page. Evidently, Google sends an alert for any change to a page that contains the alert keywords. The alert was not caused by the cell network upgrade; that sentence simply contained the term "ETECSA." In fact, the most recent change to the ETECSA page on Wikipedia was made on August 12, so the non-news alert is two months old.

-----
Update 11/25/2015

Here is another type of Google Alert failure:


This appears to be a link to a post on the BN Americas news site about a service outage at Cuban email provider ETECSA, but it is not. Instead, it is a link to a post on a spam site called WN.com.

As you see below, the WN page has one sentence on the ETECSA outage, hidden in a sea of spam links and illustrated by a couple dancing on the beach. Can you find the link to the BN Americas article?


Can't Google come up with an algorithm or a blacklist to filter out this sort of "news"?

-----
Update 1/31/2016

I have a longstanding interest in satellite Internet connectivity for developing nations, so have been watching SpaceX's attempts at recovering booster rockets. They tried for the third time to recover a rocket on a drone barge at sea on January 17, but failed. I watched the launch webcast and posted a note on the failure the following morning.

Since I am interested in the topic, I have subscribed to "Musk-Satellite" Google Alerts and, at 10:08 Sunday morning, I received one on a Washington Post article entitled "Elon Musk's SpaceX to attempt another rocket landing. This time with a twist."

Judging by the title, the article had been written before it was known that the attempt had failed, but it now says the landing failed and includes several tweets time stamped after 10:08 AM. Evidently, the story was revised after it was first posted.

But that was just the first Google Alert -- I have received 158 more since then, many of which link to two or three articles on the failed recovery. I've looked at several of these articles -- they provide no additional analysis.

They are slowing down now -- I have only received two so far today. By the time these Alerts stop, I will have received links to hundreds of redundant articles, and thousands of ad impressions.

-----
Update 3/12/2016

A blog copied everything I posted on my blog on the Internet in Cuba for years. Both blogs are on Google's Blogger site. Can't Google detect that sort of thing? Are they even incented to do so? I guess they make money on pirated click-bait ads.

Saturday, August 22, 2015

Google's OnHub WiFi router is a strategic product

During the second quarter of 2015, Google reported selling $16.023 billion worth of advertising -- 11% more than the second quarter of 2014. Advertising is their bread and butter, but "other sales" grew by 17% to $1.704 billion.

I don't know how "other sales" breaks down, but a chunk of that is hardware devices like the Chromebook Pixel, Chromecast, Nest thermostat, Nexus phone and, now, WiFi routers.


Google's first WiFi router, the OnHub, is being built by Chinese manufacturer TP-Link, and ASUS will build one later this year.

With the exception of the Chromecast, Google seems to be going for high-end devices, but does the world need another $200 home router? Why would Google bother? I can think of a couple of strategic reasons.

For one, Google has an eye on the home automation market -- controlling things like their Nest thermostat and your TV set. The OnHub has the potential to become a network-connected home automation hub -- like the Amazon Echo -- sitting in your living room and listening for commands like "turn on the bedroom lights" and "lock the front door."


The OnHub lacks the Echo's microphone, but maybe the ASUS device or OnHub v2 will take care of that.

The second strategic advantage is that, since the device is online, Google will be able to tune it dynamically for maximum performance. Google will be able to monitor your network and change OnHub parameters and firmware. You will also be able to control the network more effectively, for example, giving streaming video in the den priority over email in your home office, as sketched below.
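To illustrate the prioritization idea -- a toy model, not how the OnHub actually works -- here is a sketch of a scheduler that always sends video packets ahead of bulk traffic:

```python
# Toy priority scheduler: lower priority number is transmitted first.
import heapq

PRIORITY = {"video": 0, "web": 1, "email": 2}

queue = []
arrivals = [("email", "IMAP sync"),
            ("video", "stream chunk 1"),
            ("web", "page fetch"),
            ("video", "stream chunk 2")]

for seq, (kind, packet) in enumerate(arrivals):
    # seq breaks ties so equal-priority packets keep their arrival order
    heapq.heappush(queue, (PRIORITY[kind], seq, packet))

while queue:
    _, _, packet = heapq.heappop(queue)
    print(packet)   # chunk 1, chunk 2, page fetch, IMAP sync
```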

Being able to improve your WiFi performance is a nice feature and, the more time you spend online and the less time you spend waiting for content, the more ads Google will be able to show you.

-----
Update 8/24/2015

There are many comments on this post on the Slashdot Web site. Some worry about Google's ulterior motives:
They want to be able to mine your data at the lowest possible level, have a handy backdoor available in case the NSA comes calling, and so they can insert their own ads on every page of every website you ever browse.

There is no way in hell you should be trusting a Google which has remote access to your network, home automation, doors and every other thing Google thinks they're going to sell you.

They want to control your network. They want to inject advertising into everything you do. They want you to have no choice but to use DNS servers they control.

Trying to inject advertising into your internet stream would be a ham-handed approach the idiots at Lenovo would try. Google is more clever than to slit their own device's throat with something so stupid as that. (Note that Lenovo was caught injecting ads and I suggested that selling more ads would be a nice side-effect of improving speed, but agree that Google would not inject ads on their own).
These worries are contradicted by this comment:
I know a couple of people who were involved in the development of OnHub and, FWIW, they say that the motivation was that there's a need for a Wifi router that performs better and is more secure. Not a strategic bet, just a perceived market opportunity which they thought Google was well-equipped to fill.

With regard to performance, the antenna design of the OnHub is supposed to be dramatically better than anything else on the market, and the device incorporates ideas from the Software Defined Networking stacks Google developed internally for its data centers, to optimize data flow. I wouldn't have thought there was much you could do to make Wifi work better, since the ISP connection is generally the bottleneck, but apparently there is. With respect to security, it adopts a number of ideas from ChromeOS, plus fully-automated updates. Probably the biggest security benefit compared to the competition is that security is actually a primary design goal, which isn't the impression I get from makers of home routers.

We'll see if OnHub actually is enough better than the competition to justify its premium price. Based on what I know of the people working on it I expect that it will. I ordered one.
Check Slashdot out -- there are many more comments.

-----
Update 9/3/2015

The FCC is considering new rules that would ban WiFi firmware modification. I guess the FCC is worried about hacking or perhaps increasing power beyond legal limits, but, if passed, the regulations would limit the ability of companies like Google and Amazon to upgrade their Internet-connected home hubs.

The FCC is open for comments on this proposal through early morning, September 8.

Tuesday, June 23, 2015

Satellite Internet update -- Airbus will make satellites for OneWeb

OneWeb and SpaceX are using different technologies and have different organizational strategies.

Two companies, OneWeb and SpaceX, are in competition to offer global Internet access and backhaul over constellations of low-earth orbit satellites. SpaceX recently announced that they were ready to begin limited testing and OneWeb CEO Greg Wyler gave a progress report last March.

At the same time, OneWeb announced that five companies had bid on the contract to build their satellites. Airbus has won the contract. They will build 900 150-kilogram satellites, 648 of which will be used in OneWeb's initial, near-polar orbit constellation, as shown in this animated video:



The Airbus announcement raises a lot of questions, which are addressed in a terrific, in-depth interview of Brian Holz, OneWeb's Director of Space Systems. Holz talks about the reasons for producing the satellites in the US and the factors in choosing a factory location, the cost of the satellites ($400,000-500,000 each), the need to have global participation in a global project, launch services, satellite reliability and plans for eventually deorbiting them, financing and the business case, the search for manufacturers of millions of user terminals and antennas, etc.

Brian Holz, OneWeb’s Director of Space Systems

Holz and CEO Greg Wyler have experience -- they were together at O3b Networks, which is already delivering industrial-scale connectivity using medium-orbit satellites -- and are worthy competitors for Elon Musk's SpaceX effort. While seeking the same goal -- global connectivity -- OneWeb and SpaceX are using different technologies and have different organizational strategies.

SpaceX plans to have around 4,000 smaller, cheaper, shorter-lived satellites orbiting at only 645 kilometers. SpaceX will also keep more of the project in-house than OneWeb, which is assembling a coordinated coalition of partners. Virgin Galactic, Qualcomm, Honeywell Aerospace and Rockwell Collins are already on board, and Airbus will not be a mere supplier but a partner in a joint venture that will include others for finance and marketing as well as technology.
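Some rough numbers put the two altitudes in perspective (my own back-of-the-envelope figures, computed from textbook formulas):

```python
# Orbital period and best-case round-trip delay versus altitude.
import math

MU = 398_600.0       # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6_371.0    # mean Earth radius, km
C = 299_792.458      # speed of light, km/s

for name, alt_km in [("SpaceX LEO", 645), ("OneWeb LEO", 1100), ("GEO", 35_786)]:
    a = R_EARTH + alt_km                              # orbital radius, km
    period_min = 2 * math.pi * math.sqrt(a**3 / MU) / 60
    rtt_ms = 2 * alt_km / C * 1000                    # straight up and back
    print(f"{name:11s} period {period_min:6.0f} min, min RTT {rtt_ms:5.1f} ms")
```

A satellite at 1,100 kilometers circles the earth in under two hours and adds only a few milliseconds of delay, versus roughly a quarter of a second for a geostationary link -- one reason both companies are betting on low orbits.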

Both projects are extremely ambitious, expensive and risky. I don't know which, if either, will "win," but the best possible outcome would be for both to succeed and compete with each other and with terrestrial ISPs.

I worry about the problems of capitalism with its massive concentration of power and income inequality, but this is an example of capitalism at its best.

-----
Update 6/25/2015

OneWeb founder Greg Wyler announced that they have received $500 million in funding from a group that includes Airbus and that they are seeking to raise as much or more in their next round. Previously announced partners Qualcomm (communication technology) and Virgin Galactic (launch services) are also investors.

Their plan will require an estimated $2-2.5 billion, and it does not seem they will have trouble raising it. I'd like a few shares myself :-).

OneWeb founder Greg Wyler

-----
Update 4/19/2016

SpaceX has (rightfully) been in the news lately because they succeeded in recovering a booster rocket by landing it on a barge at sea. If they can do that consistently, it will dramatically lower the cost of their plan to offer Internet service using a constellation of low-earth orbit satellites.

But, let's not forget OneWeb -- they are also making steady progress toward their own constellation of Internet service satellites. OneWeb just announced that they (in a joint venture with Airbus) will build a factory to mass produce small satellites near NASA's Kennedy Space Center in Florida. (SpaceX is also designing satellites suitable for mass production).

OneWeb plans to place Internet satellites in 18 orbital planes at an altitude of 1,100 kilometers. They plan to build about 900 of these 150-kilogram satellites -- 720 for use and the rest spares -- at a cost of $500,000 each. (See the animation above).

They will also build satellites for other customers at the new plant and they hope to be able to launch up to 36 satellites on a single flight.