Computer Science in High School Education… at long last?

The Des Moines Register reports that the state of Iowa is contemplating requiring computer science coursework as part of its core high school curriculum. The team of science, technology, engineering, and mathematics experts recommending this to the state calls requiring computer science coursework in Iowa high schools a “very bold recommendation”.

The article quotes various people on what this computer science coursework would tentatively include, suggesting that the rough plan is to convey to students

  • how computer systems work
  • how to write computer programs
  • computer science topics that reflect current industry needs

The first two points have been making appearances in K-12 education at least since the 1980s, in the form of Seymour Papert’s methodologies using Logo and Turtle programming, and more recently through the MIT Media Lab’s Scratch programming initiative. [I myself was introduced to computer programming at least in part in an elementary school Logo/Turtle programming class circa 1990.]

The third point sounds delightful, but, if formalized into published curricula, might be impractical. Core computer science topics like data structures, algorithms, and automata theory are always in style, but keeping up with the cool, hip programming languages and tools is evidently a challenge for textbook authors and for the teachers who adopt their books. This has been true in college-level computer science academia, and I imagine it would be at least as true at the high school level. It might be easier to pick some reasonably current set of technologies, like maybe Python (24 years old), Subversion (14 years old), and GNU Emacs (30 years old), and assume that the ideas behind those technologies will still be relevant when the students graduate.
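
As an illustration of what I mean by evergreen core knowledge, here is a minimal sketch of binary search written in plain C. This is my own example, not something from the article or any proposed curriculum; the function name and sample data are invented, but the algorithm itself has been taught essentially unchanged for decades:

    #include <stdio.h>

    /* Classic binary search over a sorted array: the kind of core
       algorithmic idea that outlives any particular language or tool.
       Returns the index of 'key', or -1 if it is not present. */
    static int binary_search(const int *a, int n, int key)
    {
        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (a[mid] == key)
                return mid;
            else if (a[mid] < key)
                lo = mid + 1;
            else
                hi = mid - 1;
        }
        return -1;
    }

    int main(void)
    {
        int sorted[] = { 2, 3, 5, 7, 11, 13, 17, 19 };
        printf("13 is at index %d\n", binary_search(sorted, 8, 13));
        return 0;
    }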

[How do college computer science graduates cope with having not been taught the most current languages and tools of the trade? Despite new things coming out at a frenzied pace, programming languages that actually see much real-world use are still catching up with the core ideas of languages like Lisp (57 years old) and ML (42 years old), while typically retaining a great deal of syntax in common with C (43 years old). Learning those three languages, or some other set of similar languages, gives abstract knowledge more than sufficient to pick up any language the software development industry is likely to throw at its practitioners for the foreseeable future. For that matter, you could probably still spend a fifty-year-long career writing code in nothing but C, if you tried!]
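
As a small illustration of that overlap, here is my own sketch of an idea usually associated with Lisp and ML, passing a function as an argument to another function, expressed with an ordinary C function pointer. The helper names are invented for this example, and nothing in it should require anything newer than 1989-era ANSI C:

    #include <stdio.h>

    /* Apply 'f' to each element of 'a' in place: a function passed
       as an argument, an idea Lisp and ML made central, expressed
       here with a plain C function pointer. */
    static void map_in_place(int *a, int n, int (*f)(int))
    {
        int i;
        for (i = 0; i < n; i++)
            a[i] = f(a[i]);
    }

    static int square(int x) { return x * x; }

    int main(void)
    {
        int xs[] = { 1, 2, 3, 4 };
        int i;
        map_in_place(xs, 4, square);
        for (i = 0; i < 4; i++)
            printf("%d ", xs[i]);
        printf("\n");
        return 0;
    }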

An understanding of computer systems ought to permeate much serious decision-making in society; while some of these students may indeed discover their vocational calling in a high school computer science class, even non-programmers should wield enough computer science knowledge to make sound decisions on election day, if nothing else. So cheers to all of the state education departments that are requiring the next generation to learn a bit of computer science! But at the same time, I wonder, after all of the K-12 computer science education groundwork that was laid in the 1980s… why did this take so long?

The GNU C Reference Manual v0.2.4

The v0.2.4 release of The GNU C Reference Manual is now available.

This version incorporates a number of corrections recommended by readers, and the credits now more accurately reflect James Youngman’s generous contributions as co-author. As always, I welcome corrections and suggestions, sent to me at: tjr@gnu.org

Google Photos

This week Google announced and released their new photo sharing service. I mostly use Flickr, but I had a handful of photo collections in Google Picasa years ago, which got dragged over into Google+ Photos, and have now dutifully arrived in Google Photos.

As an overall interface for viewing photos, Google Photos seems nice, but not particularly better or worse than Flickr. There are options to share photos on Facebook, Twitter, and Google+, but I see no way to embed various-sized photos within web pages as I do with Flickr.

I also see no way to tag photos, but this might not be significant, as the facial, object, and location recognition built into Google Photos is so accurate that it comes across as frightening to this privacy advocate.

Facial recognition in my photo sample set is almost perfect. Whether the face is looking straight on, turned to the side, or wearing a hat, Google Photos can pick it out. It also correctly identified photograph locations including Boston, Washington D.C., Cedar Rapids, Omaha, Irvine, Joshua Tree National Park, and San Juan Capistrano, seemingly based on photographic content alone. (My ancient Canon 5D camera doesn’t have a GPS to embed location data, and my even more ancient Canon EOS-3 film camera certainly doesn’t!)

Object recognition was nearly as accurate: a search for “food” turned up pictures of restaurants, of food on a plate, and of unpicked vegetables still growing, though I was amused to see a picture of a live crab in an aquarium counted amongst “food”… not strictly incorrect, but unexpected.

The two main things I do with photo sharing are to set up a place to store, share, and browse photos, and to embed them into web pages (such as this blog post). Google Photos does a fine job of the first, but apparently not so great a job of the second, so I will be sticking with Flickr for the time being.

The content recognition software behind Google Photos is outstanding, but it might open a whole new can of worms in terms of reasonable expectations of privacy. Obviously, anyone sharing a photo in public would not expect privacy for the photo itself, but the fact that so much data can be automatically sucked out of the photo could easily give one pause. And it doesn’t really matter whether your photos are stored on Google Photos or not, as Google can find and analyze photos from Flickr or from any other public photo site.