WRITINGS

Digitize or Die

The James Reserve "Robot Forest" in 2005

July 1, 2020




As I approach my 70th year, I'm compelled to digitize my life. My granddaughter just turned 8, my son and daughter are in their late 30s, and I want any future interactions with them to be possible through a digital avatar once I'm gone.

Several of my colleagues have written their autobiographies, either as humorous stories published in book form or as blogs. I prefer to leave behind an interactive copy of myself that can tell my best stories. I don't expect it to be an easy task; I've been building my own computers and sensing gadgets to examine and experiment with nature since the late 1960s.

I enjoy figuring out new ways to digitize nature to amplify our understanding of the complexity of biological diversity. I've worked with interdisciplinary teams of faculty and students to design, engineer, and deploy new tools for remote sensing of ecosystems using specialized cameras, environmental sensors, and robots. The robots we designed were each specialized for different tasks. For example, our drones were capable of free-ranging flight surveys of vegetation and ponds, measuring water chemistry, collecting samples, and building complex maps. Robots on cables navigated through a forest canopy mapping the patterns of light that penetrated the trees and measuring carbon dioxide absorbed by plants. Transparent underground tubes containing free-wheeling microscope drones photographed the tiniest roots and their fungal symbiotes while adjacent soil probes measured temperature, moisture, and carbon. Swimming robots crisscrossed the open water surface of lakes and bays while measuring the water chemistry and conditions that lead to harmful algal blooms. From a swimming pool testbed to its final destination in French Polynesia, submarine robots and their sensor buoys measured seawater chemistry while simultaneously capturing remote sensing data about the coral reefs to document their health and susceptibility to bleaching.

We have learned a lot through digital technology, but what has motivated me most is the sense of urgency to digitize as much of nature as possible. It has been an impossible task.

Now that I'm retired, I have plenty of free time to reprocess my life's work, converting as much of the original data or its metadata as possible into readable media and then transforming those studies into interactive storytelling. Unfortunately, the steady churn of digital formats works against me: many digital files or media types older than ten years are nearly impossible to open or convert unless you saved the original hardware and software. I've also found that backup tools can be flawed, leaving unfortunate gaps in the data.

These are the aquatic sampling sensor buoys and networked info-mechanical robots in action at the James Reserve in 2006.

Thanks to my academic career, I have used many computers, including an IBM 360 mainframe, a DEC PDP-11 minicomputer, and too many desktop PCs to count! With the evolution of programming languages, operating systems, file formats, and applications, I've found the technology learning curve steep but fun. The digital archive of my professional life contains the punch cards I used in graduate school, backup tapes, and floppy disks, first large, then smaller. My first hard disk drive was a 10MB unit attached to an Apple IIe, then a 20MB drive in my first Mac, and 40MB in the Mac II. After that came a progression from 1GB to 500GB drives over the following ten years. My current SD card stores 256GB, while my portable USB-C external drives hold 5TB of data… yet they keep filling up! In addition, operating systems and file formats evolve, eventually becoming incompatible with older versions. On a Mac, for example, without knowing about the resource fork, you may not be able to open a file created under an old operating system on a new one.
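For the curious, modern macOS still exposes a file's resource fork through a special path, which is one way to check whether anything survived the migrations. A minimal sketch in Python, assuming you are on a Mac and the filename below is just a stand-in for one of my old documents:

import os

# On macOS, a file's resource fork (where classic Mac OS kept icons, code,
# and sometimes the document data itself) is reachable through the special
# "..namedfork/rsrc" path. The filename is a placeholder for one of my old files.
old_file = "FieldNotes1987"
rsrc_path = os.path.join(old_file, "..namedfork", "rsrc")

if os.path.exists(rsrc_path) and os.path.getsize(rsrc_path) > 0:
    print(f"Resource fork present: {os.path.getsize(rsrc_path)} bytes")
else:
    print("No resource fork found; only the data fork may have survived.")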

My first foray into artificial intelligence came when the Lisp programming language, inference engines, knowledge bases, and expert systems were the state of the art. Unlike today's machine learning approaches, expert systems were all about distilling a domain of knowledge into collections of decision trees and then applying an inference engine to resolve patterns, a rough mirror of our own logical thinking. The Apple technology group led by Alan Kay had discovered my early work and seeded our lab with Macs and the Smalltalk programming environment. We also received beta copies of Bill Atkinson's newly written HyperCard. Without deep-diving into the computer science weeds, both tools were well suited to modeling the hierarchical classifications common to plant and animal taxonomy. For example, with these tools, my students and I wrote programs to identify common wildflowers and their habitats using a visual question-and-answer approach that took on the metaphor of a nature walk.
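To give a flavor of that approach in today's terms, here is a toy sketch in Python of a yes/no identification key. The questions and species are hypothetical placeholders, not the original HyperCard and Smalltalk knowledge base:

# Each node is either a yes/no question with two branches or a leaf naming a match.
# The questions and species are made up for illustration only.
KEY = {
    "question": "Are the petals yellow?",
    "yes": {
        "question": "Do the flowers sit on long, leafless stalks?",
        "yes": {"species": "a buttercup-like wildflower"},
        "no": {"species": "a monkeyflower-like wildflower"},
    },
    "no": {"species": "a lupine-like wildflower"},
}

def identify(node):
    """Walk the key by asking the user one question at a time."""
    while "species" not in node:
        answer = input(node["question"] + " (y/n) ").strip().lower()
        node = node["yes"] if answer.startswith("y") else node["no"]
    return node["species"]

if __name__ == "__main__":
    print("Best match:", identify(KEY))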

The first version of the Macroscope simulated a guided naturalist walk through the ecosystems of the San Jacinto Mountains in California in 1987

We dubbed our project "The Macroscope," inspired by the science fiction novel of the same name by Piers Anthony, who conceived of a space-based telescope of infinite resolution in the space-time continuum. Our Macroscope included a geographic information system with satellite and aerial photographs and "street-view"-style 360-degree panoramas of representative habitats. A species database was built around close-up images of every species we encountered on our field trips to biological field stations and reserves worldwide. The goal was to link the "Big Data" of our digital multimedia database of ecosystems and species with a natural-language AI that would simulate a human botanical or ecological expert. The typical dichotomous keys in field guides rely on the user learning specialized botanical anatomy and require direct measurements of flowers and other plant parts. Our system instead used the 54,000 photographs of natural scenes and close-ups of species, stored on a large laserdisc and controlled by the first color Apple Macintosh, to take you on a virtual guided naturalist walk in many places around the world. It was usable by children and adults alike.

I'm holding the 16mm Bolex film camera we used on our first Venezuela field expedition in 1989 to film the Macroscope ecology laserdisc.

Technically, building our Macroscope in the late 1980s involved shooting single frames of 16mm film sequentially from a stationary point to record overlapping panoramic images in every direction. There were no hemispherical lenses in those days, so we used a specialized tripod head to photograph a calibrated overlap between frames as we moved the pan head horizontally, then vertically. I photographed approximately 200 images for each panorama. The 16mm film was then converted to 1-inch videotape at a post-production facility in Burbank and finally written to a recordable laserdisc. Next, we built FileMaker databases that sent a command to the laserdisc player to search for a particular image. The resulting video image was processed by a frame grabber (a device that converts the analog video signal into a digital image). The last step inserted the new digital image into the appropriate database record, which contained the metadata about the image: its geographic location, its spatial position relative to adjacent images, the taxonomy of the species, the habitat type, the people involved, and additional field notes.
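Translated into today's terms, each record tied a laserdisc frame number to its field metadata, and a lookup simply told the player which frame to seek. A rough sketch in Python; the field names and the player command string are my own illustrative assumptions, not the original FileMaker schema or player protocol:

from dataclasses import dataclass, field

@dataclass
class ImageRecord:
    # Field names approximate the metadata we tracked; they are not the original schema.
    laserdisc_frame: int              # frame number on the analog disc
    latitude: float
    longitude: float
    pan_position: str                 # position relative to adjacent frames in the panorama
    species: str                      # taxonomic name, when the frame is a close-up
    habitat: str
    photographers: list[str] = field(default_factory=list)
    notes: str = ""

def seek_command(record: ImageRecord) -> str:
    """Build a hypothetical serial command asking the player to seek to this frame."""
    return f"FR{record.laserdisc_frame:05d}SE"

record = ImageRecord(laserdisc_frame=10432, latitude=33.81, longitude=-116.78,
                     pan_position="row 2, column 7", species="Pinus jeffreyi",
                     habitat="montane conifer forest")
print(seek_command(record))           # -> FR10432SE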

An overview map of the clickable panoramic nodes that link the QuickTime VR "view maps" in the James Reserve Macroscope

Apple Computer provided a new programmable video format for us to test, called QuickTime. First, we wrote custom XCMDs that manipulated the video so that individual images became clickable maps to other images, or reported the precise color under the cursor as a numerical value that connected that feature to a database. Using this tool, one could explore a 360-degree panoramic image and display the name of every plant or animal identified by our expert system. Next, we wrote another bit of software to take the individual overlapping stills and merge them into a continuous, QuickTime VR-compatible panorama. QuickTime VR was the precursor to most of today's 2D VR experiences, such as Google Earth, Google Street View, and YouTube VR.
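The idea behind that color lookup can be sketched in a few lines of modern Python using Pillow: paint each identified plant or animal onto a hidden index copy of the panorama in a unique flat color, then map the color under a click back to a name. The colors, species, and file path here are invented for illustration:

from PIL import Image   # pip install pillow

# Hypothetical lookup table: each flat color painted onto a hidden "index" copy
# of the panorama stands for one plant or animal identified in the scene.
COLOR_TO_SPECIES = {
    (255, 0, 0): "Jeffrey pine",
    (0, 255, 0): "Greenleaf manzanita",
    (0, 0, 255): "Western fence lizard",
}

def species_at(index_image_path: str, x: int, y: int) -> str:
    """Return the name keyed to the index color under a clicked pixel."""
    pixel = Image.open(index_image_path).convert("RGB").getpixel((x, y))
    return COLOR_TO_SPECIES.get(pixel, "unidentified")

# e.g. species_at("panorama_index.png", 312, 180)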

The late 1980s were a very experimental time for digital image formats and cameras. Our field cameras were analog: 35mm film, 3/4-inch video, and the newest type, called still video. When we worked in the rainforests of Venezuela, the Canon camera company loaned us one of their prototype professional still video cameras, and TEAC loaned us a laserdisc video recorder that could store up to 54,000 images. This equipment enabled us to build an extensive image database of rainforest, savanna, and alpine species while exploring Venezuela. We thought the still photos and videos were beautiful, but they were only about 700 by 500 pixels, with 24-bit color! Manipulating an image down to the pixel required digitizing it from the video source. The thousands of images we collected were small and grainy by today's standards. The Macroscope was a substantial multi-year project that required expensive hardware and lots of training to set up and operate, let alone to catalog and add new data. In addition, the AI interface was difficult to use. Today this project, its data, images, and text could easily fit on a 16GB thumb drive. The Macroscope was a precursor to features that would become mainstream on everyone's smartphone thirty years later.
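A quick back-of-the-envelope check of that thumb-drive claim, assuming roughly 15:1 JPEG compression (which, of course, the analog originals predate):

# Rough storage arithmetic for the 54,000-image laserdisc archive.
width, height, bytes_per_pixel = 700, 500, 3          # ~700x500 pixels, 24-bit color
images = 54_000
uncompressed_gb = width * height * bytes_per_pixel * images / 1e9
jpeg_gb = uncompressed_gb / 15                          # assuming ~15:1 JPEG compression
print(f"Uncompressed: {uncompressed_gb:.0f} GB, as JPEGs: about {jpeg_gb:.1f} GB")
# -> Uncompressed: 57 GB, as JPEGs: about 3.8 GB, comfortably inside a 16GB drive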

From interactive laserdisc to real-time sensors, the Macroscope evolved from this first conceptual design to a real-world laboratory of wireless sensors and robots "taking nature's pulse" from 2000 to 2012. The National Science Foundation provided funding.

I participated in nearly one hundred digitally inspired conservation and technology projects throughout my career. Reviewing them and attempting to convert whatever original data files I still have is daunting. There are heartbreaking gaps. During the entire field summer of 1993, we documented wildlife in Namibia, mapping thousands of locations with a GPS-enabled Apple Newton. Sadly, that data was lost to a hard drive failure and the lack of a reliable backup (and the Newton itself is now a non-functioning artifact). Only a handful of the MiniDV videotapes from that expedition remain. Thankfully, those were converted to QuickTime by a video transfer service, since the camera needed to play those tapes is long gone.

The majority of the digital data collected during our 1998 Namibia field expedition was lost to an unfortunate drive failure and an inadequate backup system. A few digital video files are all that remain. The quality of these 30-year-old videos makes me laugh, and then feel old!

Someday I'll have the digital assets at hand to program my avatar. The user interface is still up in the air. I've tried several flavors of chatbot, and they hold some promise. My conversations with Siri on my HomePod are not that deep, even if he does a great job controlling my smart home appliances and recommending music, and Siri is not an open system. Vector, the little robot, is a favorite of my granddaughter's. I've been experimenting with its SDK, but Anki, the parent company that hosts the cloud behind its conversational AI, is shutting down in September. That leaves DIY, so I signed up for a free Google Cloud account and began learning Dialogflow for conversational AI and TensorFlow to add machine learning to the visual side of my stories. If a picture is worth a thousand words, wouldn't it also be helpful if a picture could explain itself?
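As a first experiment, sending a visitor's question to a Dialogflow agent from Python looks roughly like this. It assumes the google-cloud-dialogflow client library, an agent already trained on my stories, and placeholder project and session IDs:

from google.cloud import dialogflow   # pip install google-cloud-dialogflow

def ask_avatar(project_id: str, session_id: str, question: str) -> str:
    """Send one question to a Dialogflow agent and return its text reply.
    The project and session IDs are placeholders; the agent's intents would be
    trained on my digitized stories."""
    client = dialogflow.SessionsClient()
    session = client.session_path(project_id, session_id)
    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=question, language_code="en-US")
    )
    response = client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    return response.query_result.fulfillment_text

# e.g. print(ask_avatar("my-avatar-project", "visitor-001", "Tell me about Venezuela in 1989."))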

Can I explain myself? Will my avatar? Time will tell.