Reflections on AI and Computing 25 Years Ago


In 1998, almost on a whim, I enrolled in a master’s degree in Information Science at the University of North Carolina at Chapel Hill. I was pleasantly surprised to be accepted. My British undergraduate degree translated favorably in terms of credits, despite my having scraped through by the skin of my teeth as a 21-year-old twelve years earlier.

I had spent the previous 2.5 years running a home daycare and speaking to babies and toddlers in words of (generally) no more than two syllables, so it was culture shock. And it was thrilling… like stepping through a trap door to another world. I had never been on the Internet before and I had certainly never heard the term artificial intelligence.

At that time the World Wide Web was growing explosively and exponentially, and in an attempt to make all this information discoverable, search engines were sprouting like weeds.

One task my networking class was given was to beta-test one of these new search engines. Beta-testing occurs in real-world situations with real-world users, before a piece of software is officially released to the public. The instruction from the professor was “See if you can break this!” The search engine had a strange name… Google. This was apparently a deliberate misspelling of the word googol, which is a very large number (a 1 followed by one hundred zeros).

Information Science (IS) as a field sits at the interface between computers and humans. This includes how information (which is what data becomes when it is organized) is stored, organized, managed, cleaned, labeled, disseminated, and retrieved. IS looks at how humans use computers and what they do (and want to do) with computing technology, both hardware and software. It aims to improve the interfaces that humans use to access information, making them intuitive, ergonomic, and efficient.

I scheduled my classes around my duties as a mother, as I had a 3-year-old and a 5-year-old at home. My husband switched to an evening shift so he could be home during the day, and we tag-teamed.

I became immersed in a world of acronyms and computer languages. I learned to code: C++, Unix, HTML, JavaScript, ColdFusion, SQL, ASP. I learned how computers and networks communicate and memorized each layer of the OSI stack (the mnemonic being Please Do Not Throw Sausage Pizza Away). I learned how to build a search engine. I went on a field trip to the AT&T human-computer interaction lab and got to use eye-tracking software. I published my first article in an academic journal.
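
As an aside, that mnemonic lines up with the seven OSI layers from the bottom up. Here is a quick toy sketch in Python, purely for illustration, that prints the mapping:

    # The seven OSI layers, bottom (1) to top (7), paired with the words of the
    # mnemonic "Please Do Not Throw Sausage Pizza Away".
    OSI_LAYERS = [
        (1, "Physical",     "Please"),
        (2, "Data Link",    "Do"),
        (3, "Network",      "Not"),
        (4, "Transport",    "Throw"),
        (5, "Session",      "Sausage"),
        (6, "Presentation", "Pizza"),
        (7, "Application",  "Away"),
    ]

    for number, layer, word in OSI_LAYERS:
        print(f"{number}. {layer:<12} ({word})")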

I became interested in relational databases, data retrieval, and data mining. The questions being asked in classes centered on using metadata (data about data) to describe information so that we can more easily retrieve it. How can we improve the results from a search engine query so they more closely match the users’ expectations? How do we accurately describe images in words so we can retrieve them with a search? Google Images was still a couple of years in the future (it launched in 2001).
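
To give a flavour of what we meant by metadata, here is a toy sketch in Python of an image catalogue that can be searched by keyword. The file names, fields, and keywords are invented purely for illustration:

    # A toy image catalogue: each record pairs a file with descriptive
    # metadata (data about the data) so that a text search can retrieve it.
    images = [
        {"file": "dsc_0042.jpg",
         "metadata": {"title": "Old Town rooftops at dusk",
                      "keywords": ["Edinburgh", "rooftops", "dusk"]}},
        {"file": "dsc_0117.jpg",
         "metadata": {"title": "Chapel Hill in spring",
                      "keywords": ["North Carolina", "campus", "dogwood"]}},
    ]

    def search(query):
        """Return files whose metadata mentions the query term."""
        q = query.lower()
        return [img["file"] for img in images
                if q in img["metadata"]["title"].lower()
                or any(q in keyword.lower() for keyword in img["metadata"]["keywords"])]

    print(search("Edinburgh"))   # ['dsc_0042.jpg']

Without the metadata, the images are just pixels; with it, a simple keyword search can find them.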

I learned about natural language processing (NLP). One of the programming languages used for NLP was called Lisp, which I still find strangely hilarious. These days NLP allows us to query a search engine using everyday language, e.g. “Why do some songs evoke negative emotions?” rather than stringing together keywords, quotation marks, and Boolean operators (AND, OR, NOT) as we had to do in the past to get meaningful answers. Eventually, NLP formed the basis for Google Translate, launched in 2006.
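
For anyone who never had to do it the old way, here is a toy sketch of Boolean keyword retrieval in Python. The documents and keywords are made up for illustration:

    # Old-style Boolean retrieval: a document matches only if it contains ALL
    # of the required terms AND NONE of the excluded terms.
    documents = {
        "doc1": {"music", "emotion", "memory"},
        "doc2": {"music", "lyrics", "happiness"},
        "doc3": {"emotion", "psychology"},
    }

    def boolean_search(must_have, must_not_have=()):
        """Return documents satisfying the AND / NOT conditions."""
        return [name for name, words in documents.items()
                if set(must_have) <= words and not (set(must_not_have) & words)]

    # Roughly: music AND emotion NOT happiness
    print(boolean_search(["music", "emotion"], ["happiness"]))   # ['doc1']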

I was intrigued by the concept of neural networks — machine learning models of interconnected nodes that aimed to mimic the functions of the human brain — and I wanted to learn more. These were the technologies of artificial intelligence, which sought to enable machines to perform tasks typically done by humans, as well as or better than humans could. Such tasks included the ability to learn from experience, to understand language as it is naturally spoken, to recognize patterns in data, and to use this information to make decisions.
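
The building block behind all of this is less mysterious than it sounds: a single artificial "neuron" just takes a weighted sum of its inputs and squashes the result through a function. A minimal sketch in Python, with the weights and inputs invented for illustration:

    import math

    def neuron(inputs, weights, bias):
        """One artificial neuron: weighted sum of inputs, squashed by a sigmoid."""
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 / (1 + math.exp(-total))   # an output between 0 and 1

    # Two inputs feeding one neuron; a real network wires many of these
    # together in layers and learns the weights from data.
    print(neuron([0.5, 0.8], weights=[0.4, -0.6], bias=0.1))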

Back then, I heard a memorable quote by Father John Culkin (sometimes misattributed to Marshall McLuhan). Today this seems even more pertinent, given how pervasive our tools have become, and eminently applicable to generative AI:

“We shape our tools and, thereafter, our tools shape us.”

A year after I earned my MSIS degree in 2000, I moved from North Carolina to Scotland with my family to accept a job as a developer at the Learning Technology Section of the College of Medicine and Veterinary Medicine at Edinburgh University. The plan was to be there for three years and then return to NC. My new job started the day before 9/11.

I was creating online learning modules for medical students in genetics and other topics, and developing the Edinburgh Electronic Medical Curriculum (EEMeC), which won a Queen’s Anniversary Prize in 2005. I moderated the students’ online discussions and became fascinated by the different ways in which they made use of the discussion boards we had provided.

As I was coding every day, it was no surprise to me that I started dreaming in code. Coding, while logical and detail-oriented, can also be a highly creative activity.

Still intrigued by AI, and drawn to the appealing idea of staying in Edinburgh longer than three years, I started looking into doing a master’s degree there. Edinburgh University has a world-renowned AI department. It was an exciting period in AI because of the vast amounts of data being generated. AI was also moving toward inductive reasoning, meaning that it could begin to make predictions based on patterns in data.

But in December 2002 the School of Informatics was ravaged by a fire in Edinburgh’s Old Town, a World Heritage Site. The fire destroyed thirteen historic buildings, including the entire AI library within the school, which had been carefully created and curated by librarian Olga Franks over several decades. Some 5,000 books, 800 journals, and 35,000 research papers were destroyed.

I decided then that I would return to North Carolina as planned and apply to a PhD program. I chose educational psychology as my field so that I could explore the pedagogy of online learning. Unfortunately, the field of education was light years behind information science in terms of technology use. I was working with large data sets doing content analysis, which meant hand-coding text for months upon months, something I found strangely absorbing.

Midway through my PhD I trained as a hypnotherapist and that has taken me down a different career path entirely. It has been about 13 years since I wrote any code — I miss the challenge of it, although these days AI can do much of it and anyway my skills are obsolete.

I stopped thinking about AI until ChatGPT popped up at the end of 2022. My brain lit up with excitement and I jumped eagerly down the rabbit hole to see what it could do.

I find it slightly amusing to hear all the buzz, the hype, and the concerns about AI now, although these are valid considerations. For many people, AI has only just appeared on their radar.

I can’t help but wonder what I would be doing now if I had applied to that AI master’s program in 2003 after all. I asked ChatGPT that question, and here’s what it told me:

Having specialized in NLP back then, you’d likely be a part of the growing AI industry today. The theoretical background you’d have would allow you to contribute to understanding how large language models like GPT-3 or BERT work. You’d probably be leading research on how to incorporate more human-like understanding of language into AI. If you pursued an academic career, you could be a leading figure in computational linguistics, working on semantic understanding, text generation, or multilingual NLP.
