Mighty Science in Defense of Nature

by Mike Hamilton

Digitize or Die

Soon I'll turn 65, so I better get on with digitizing my life. My granddaughter is 5, my son and daughter are in their 30s, and if plans work out, future interactions with them might include conversations with me as a digital avatar.


Several of my colleagues have written their autobiographies, either as humorous stories published in book form or as long-running blogs. I prefer to leave behind an interactive copy of myself, wired to know some of my best stories and to answer questions. It won't be an easy task: I've been digitizing steadily since the late 1970s, when I built my first computer and started measuring and mapping the outdoors with gadgets and widgets. You might say I have led a digitally abundant life.


As an explorer — a digital naturalist — I've been figuring out ways to digitize nature and amplify our understanding of the complexity of biological diversity. I've worked with interdisciplinary teams of faculty and students to design, engineer, and deploy new tools for remote sensing of ecosystems using specialized cameras, environmental sensors, and robots. The robots we designed were each specialized for a different task: autonomous flight surveys of vegetation, collecting water samples, navigating along cables strung through the forest canopy, collecting microscopic images of root hairs and fungi, or floating and swimming while measuring water quality and assessing coral reef health. We learned a lot through digital technology, but what motivated me most was the sense of urgency to digitize as much of nature as possible. It has been an impossible task.


Now that I'm retired, I have a new urge to reprocess my life's work and recover as much of the original data, or its metadata, as I can, to repurpose those studies into environmental storytelling and, hopefully, to entertain. Better yet, my stories might motivate young people to carry on our duty to study and protect nature — and to protect ourselves. Working against me has been the rapid change in digital formats: many digital files or media types older than ten years are nearly impossible to open or convert, unless you saved the original hardware and software. I've also found that tools for backing up files can be flawed, leaving unfortunate gaps that become, well, extinct.


Thanks to my career at Cornell University, Cal Poly Pomona, and the University of California, I have used many computers, starting with an IBM 360 mainframe and a DEC PDP-11 minicomputer, plus too many desktop PCs to count! With the evolution of programming languages, operating systems, file formats, and applications, I've found the technology learning curve to be steep but fun. The digital archive of my professional life contained the punch cards used in graduate school, backup tape drives, and large floppy disks, then smaller ones. My first hard disk drive was a 10 MB unit attached to an Apple IIe, then a 20 MB drive in my first Mac, and 40 MB in the Mac II. Over the subsequent ten years, my drives progressed from 1 GB to 500 GB. My current SD card stores 256 GB, while my portable USB-C external drives hold 5 TB of data… yet they keep filling up! Operating systems and file formats evolve, eventually becoming incompatible with older versions. On a Mac, for example, without knowing about the resource fork, you may not be able to open a file created under an old operating system on a new one.


My first foray into artificial intelligence came when a programming language called Lisp, along with inference engines, knowledge bases, and expert systems, was the innovation of the day. Not the same as today's fuzzy-logic, Bayesian, neural-net machine learning approaches: expert systems were all about digesting domains of knowledge into collections of if-then rules, or decision trees, and then applying an inference engine to resolve patterns, not unlike a mirror of our logical thinking process. The Apple Technology group, led by Alan Kay, had discovered my early work and seeded our lab with Macs and a software environment called Smalltalk, built on ideas from Lisp. We also received beta copies of Bill Atkinson's newly written HyperCard. Without deep-diving into the computer science weeds, both environments were well suited to modeling the hierarchical classifications common to plant and animal taxonomy. With these tools, my students and I wrote programs to identify common wildflowers and their habitats using a visual question-and-answer approach that took on the metaphor of a nature walk.
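For readers who like to see the idea in code, here is a minimal Python sketch of that if-then, question-and-answer style of identification. It is not our original Lisp or HyperCard code; the species, traits, and function names are invented for illustration.

```python
# A minimal sketch of the if-then / decision-tree identification style described
# above. The species list and traits here are illustrative only.

# Each candidate species is described by a few observable traits.
SPECIES = {
    "California poppy":     {"flower_color": "orange", "petal_count": 4, "habitat": "grassland"},
    "Western blue flax":    {"flower_color": "blue",   "petal_count": 5, "habitat": "meadow"},
    "Scarlet monkeyflower": {"flower_color": "red",    "petal_count": 5, "habitat": "streamside"},
}

# Questions posed to the user, in the order a naturalist might ask them.
QUESTIONS = [
    ("What color is the flower?", "flower_color"),
    ("How many petals does it have?", "petal_count"),
    ("What habitat are you standing in?", "habitat"),
]

def identify():
    """Ask questions one at a time, pruning candidates after each answer."""
    candidates = dict(SPECIES)
    for prompt, trait in QUESTIONS:
        if len(candidates) <= 1:
            break
        answer = input(prompt + " ").strip().lower()
        candidates = {
            name: traits for name, traits in candidates.items()
            if str(traits[trait]).lower() == answer
        }
    return list(candidates) or ["No match -- try the photo gallery instead"]

if __name__ == "__main__":
    print("Possible identification:", identify())
```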


We dubbed our project "The Macroscope," inspired by the science fiction novel of the same name by Piers Anthony, who conceived of a space-based telescope with infinite resolution across the space-time continuum. Our Macroscope included a geographic information system with satellite and aerial photographs, "street-view" style 360-degree panoramas of representative habitats, and a species database coupled to close-up images of as many of the plants and animals as we could identify on our field trips to biological field stations and reserves around the world. The goal was to link the "Big Data" of our digital multimedia database of ecosystems and species with a natural-language AI that would simulate a human botanical or ecological expert. The typical dichotomous keys in field guides rely on the user learning specialized botanical anatomy and require direct measurements of flowers and other plant parts. Our system instead used the 54,000 photographs of natural scenes and close-ups of species, stored on a large laserdisc and controlled by the first color Apple Macintosh, to take you on a virtual guided naturalist walk in many places around the world. It was usable by children and adults alike.


Technical aspects of building our Macroscope in the late 1980s involved shooting single frames of 16mm film sequentially from a stationary point to record overlapping panoramic images in every direction. There were no hemispherical lenses in those days, so we used a specialized tripod head that let us photograph a calibrated overlap between frames as we moved the pan head horizontally, then vertically. I photographed approximately 200 images for each panorama. The 16mm film was then converted to 1-inch videotape at a post-production facility in Burbank and finally written to a recordable laserdisc. On the computer side, we built FileMaker databases that sent a command to the laserdisc player to search for a particular image, then fed the analog video signal through a frame grabber (a device that converts an analog video frame into a digital image), and finally inserted the new digital image into the database record containing the technical information about the image: its geographic location, its spatial position relative to adjacent images, the common and scientific names of any species in the frame, the habitat classification, the names of the students involved, additional field notes, and the date and time.
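To make the workflow concrete, here is a hypothetical Python sketch of what one catalog record looked like and how the seek-digitize-attach loop behaved. The real system was a FileMaker database driving the laserdisc player; the field names, commands, and helper functions below are my stand-ins for illustration, not the original protocol.

```python
# Hypothetical sketch of the Macroscope cataloging loop. All names and the
# "SEARCH" command are illustrative stand-ins, not the original system.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ImageRecord:
    frame_number: int                               # which of the 54,000 laserdisc frames
    location: tuple                                 # (latitude, longitude) of the tripod point
    neighbors: list = field(default_factory=list)   # adjacent frames in the panorama
    species: list = field(default_factory=list)     # (common name, scientific name) pairs
    habitat: str = ""                               # habitat classification
    students: list = field(default_factory=list)    # who helped shoot and catalog it
    notes: str = ""                                 # additional field notes
    captured: datetime = None                       # date and time of capture
    pixels: bytes = b""                             # digitized frame from the frame grabber

def seek_frame(player, frame_number):
    """Tell the laserdisc player to search for a frame (stand-in for the real command)."""
    player.write(f"SEARCH {frame_number}\r".encode())

def digitize_frame(frame_grabber):
    """Capture the analog video output as a digital image (stand-in for the frame grabber call)."""
    return frame_grabber.capture()

def catalog_frame(player, frame_grabber, record: ImageRecord):
    """Seek the frame, digitize it, and attach the image to its database record."""
    seek_frame(player, record.frame_number)
    record.pixels = digitize_frame(frame_grabber)
    return record
```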


Apple provided a new video format for us to test called QuickTime, and we wrote HyperCard extensions, called XCMDs, that could manipulate the video in ways that allowed individual images to become clickable maps linking to other images, and that let spectral data be read and fed into a database or classification program. Using this tool, I could hover the cursor over a flower and automatically read the RGB spectral data of a region of the plant to cross-reference a database file and narrow the search to one or a few possibilities. I wrote another tool to take the individual overlapping stills and merge them into a continuous panorama compatible with QuickTime VR. QuickTime VR was the precursor to most of today's 2D VR experiences, such as Google Earth, Google Street View, and YouTube VR.
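A modern re-creation of that color-under-the-cursor trick might look like the sketch below, written with the Pillow imaging library rather than our original XCMDs. The reference color table, tolerance, and file name are made up for illustration.

```python
# A sketch of "hover over a flower and narrow the search," redone with Pillow.
# The species color table and tolerance are illustrative only.
from PIL import Image

# Rough reference flower colors (R, G, B) for a few hypothetical candidates.
FLOWER_COLORS = {
    "California poppy":  (240, 140, 30),
    "Lupine":            (90, 80, 190),
    "Indian paintbrush": (210, 50, 40),
}

def sample_region(image_path, x, y, radius=3):
    """Average the RGB values in a small square around the cursor position."""
    img = Image.open(image_path).convert("RGB")
    pixels = [
        img.getpixel((i, j))
        for i in range(max(0, x - radius), min(img.width, x + radius + 1))
        for j in range(max(0, y - radius), min(img.height, y + radius + 1))
    ]
    n = len(pixels)
    return tuple(sum(channel) / n for channel in zip(*pixels))

def candidate_species(rgb, tolerance=60):
    """Return species whose reference color is close to the sampled color."""
    r, g, b = rgb
    return [
        name for name, (cr, cg, cb) in FLOWER_COLORS.items()
        if abs(r - cr) + abs(g - cg) + abs(b - cb) <= tolerance * 3
    ]

# Example: sample the pixels under the cursor at (420, 310) and list candidates.
# print(candidate_species(sample_region("panorama_frame.jpg", 420, 310)))
```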


The late 1980s were a very experimental time for digital image formats and cameras. Our field cameras were analog: 35mm film, 3/4-inch video, and a new format called still video. When we worked in the rainforests of Venezuela, the Canon camera company loaned us one of their prototype professional still video cameras, and TEAC loaned us a laserdisc video recorder that could store up to 54,000 images. The equipment enabled us to build an extensive image database of rainforest, savanna, and alpine species while we explored Venezuela. We thought the images were beautiful, but they were only 700 by 500 pixels, with 24-bit color! Manipulating an image down to the pixel required digitizing it from the video source. The thousands of images we collected were small and grainy by today's standards. The Macroscope was a substantial multi-year project; it required expensive hardware and a lot of training to set up and operate, let alone to catalog and add new data. The AI interface was difficult to use. Today the entire project, images and text, easily fits on a 16GB thumb drive. The Macroscope's functions were precursors to features that would become mainstream on everyone's smartphones 30 years later.
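A rough back-of-the-envelope check, assuming typical JPEG compression (my assumption, not a measurement of the actual archive), shows why the whole image set now fits comfortably on a thumb drive:

```python
# Back-of-the-envelope storage estimate for the laserdisc image set.
# The 15:1 JPEG compression ratio is an assumption, not a measurement.
frames = 54_000
width, height, bytes_per_pixel = 700, 500, 3    # 24-bit color

raw_bytes = frames * width * height * bytes_per_pixel
compressed_bytes = raw_bytes / 15               # assumed JPEG ratio

print(f"Uncompressed: {raw_bytes / 1e9:.1f} GB")        # ~56.7 GB
print(f"As JPEGs:     {compressed_bytes / 1e9:.1f} GB")  # ~3.8 GB, well under 16 GB
```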


Throughout my career, I took part in nearly one hundred digitally inspired conservation and technology projects. As I review them and attempt to convert whatever original data files I still have, I'm finding the process daunting. There are heartbreaking gaps for me — for example, the entire field summer of 1993, when I digitally documented wildlife in Namibia, Africa, including field mapping of thousands of geolocations using a GPS-linked Apple Newton, is irretrievable 20 years later because of a hard drive failure and the lack of reliable backups (not to mention that the Newton is now a non-functioning artifact). Only a handful of the MiniDV videotapes from that expedition remain, and thankfully those were converted to QuickTime by a video transfer service, since the camera needed to read those tapes is long gone.


In about a year, I'll have the digital assets at hand to program my avatar. The user interface is still up in the air. I've tried out several flavors of chatbot, and they hold some promise. My conversations with Siri on my HomePod are just not that deep, even if he does a great job controlling my smart-home appliances and recommending music. Siri is not an open system. Vector, the little robot, is a favorite of my granddaughter's; I've been experimenting with its SDK, but Anki, the parent company that hosts the cloud for his conversational AI, is closing down in September. That leaves DIY, so I signed up for a free Google Cloud account and have begun learning Dialogflow for the conversational AI and TensorFlow to add machine learning to the visual side of my stories. If a picture can tell a thousand words, wouldn't it also be helpful if a picture could explain itself?
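As a first experiment toward a picture that explains itself, the sketch below uses a pretrained TensorFlow model to label a photograph so the avatar has something to talk about. It is a starter sketch only, not yet wired to Dialogflow, and the file name is a placeholder.

```python
# Label a photo with a pretrained TensorFlow model so the avatar can describe it.
# Starter sketch only; the example file name is a placeholder.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")

def describe_photo(path):
    """Return the top three guesses about what a photograph shows."""
    img = tf.keras.preprocessing.image.load_img(path, target_size=(224, 224))
    batch = np.expand_dims(tf.keras.preprocessing.image.img_to_array(img), axis=0)
    batch = tf.keras.applications.mobilenet_v2.preprocess_input(batch)
    preds = model.predict(batch)
    return tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]

# Example (placeholder file name):
# for _, label, score in describe_photo("namibia_giraffe.jpg"):
#     print(f"{label}: {score:.0%}")
```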


Can I explain myself? Will my avatar? Time will tell.



You can also read this story on MEDIUM

The James Reserve "Robot Forest" in 2005

These are the aquatic sampling sensor buoys and networked info-mechanical robots in action at the James Reserve in 2006.

The first version of the Macroscope simulated a guided naturalist walk through the ecosystems of the San Jacinto Mountains of California in 1987.

I'm holding the 16mm film Bolex movie camera we used on our first Venezuela field expedition to film the Macroscope ecology laserdisc in 1989.

An overview map of the clickable panoramic nodes that link the QuickTime VR "view maps" in the James Reserve Macroscope

From interactive laserdisc to real-time sensors, the Macroscope evolved from this first conceptual design to a real-world laboratory of wireless sensors and robots "taking nature's pulse" from 2000 to 2012. The National Science Foundation provided funding.

The majority of the digital data collected during our 1998 Namibia field expedition was lost due to an unfortunate drive failure and an inadequate backup system. A few digital video files are all that remain. The quality of these 30-year-old videos makes me laugh, and then feel old!

August 28, 2019