For my holiday vacation I was looking for something light, fun, and informative to read, and AIQ, by Nick Polson and James Scott, did not disappoint. The authors clearly state at the beginning that there are many public discussions which must be had about new AI technologies, and that this book is not here to provide answers for those debates. Its purpose is to help people reach a level of understanding at which they can meaningfully contribute to the debate. True to its word, I came away feeling more informed on the topic of artificial intelligence and machine learning. The book actually rekindled my interest in AI and has encouraged me to seek out other books on the topic. At about 240 pages, it is a quick read. I recommend this book for anyone with an interest in technology looking for an entertaining yet informative read.
When it comes to discussing the role artificial intelligence plays, and will play, in our society, people tend to fall into three categories: evangelists, alarmists, and realists. The AI evangelists (a group which I used to belong to) tend to see the proliferation of decision-making algorithms as a universal good. They will underplay, overlook, or sometimes even ignore flaws and drawbacks related to AI technologies. They want all of the stops pulled out as we rush towards the singularity. The alarmists are the opposite of the evangelists. They see nothing but doom and gloom, ranging from corporate dystopias to outright Terminator-style apocalypses. They seek strict limitation, regulation, and even complete bans of many new AI technologies. The realists (the group I now see myself as a member of) land somewhere in the middle of these other two. The realists welcome the positive benefits of new technologies but also acknowledge the dangers they bring along with them. This book falls squarely in the realist category.
Each chapter in the book uses a historical figure or event to introduce the context for that chapter's technique. Each chapter transitions smoothly from the historical context to modern applications and ends with a postscript giving some final thoughts. The book has chapters focusing on conditional probability/suggestion engines, model fitting/linear regression, Bayes' rule, language processing, and anomaly detection. The book then finishes off with two chapters focusing on the benefits and pitfalls of using or not using AI in society: one chapter looks at the opportunities being missed for AI in the healthcare sector, while the other explores how poor assumptions can be amplified with disastrous results.
One chapter that really stuck out to me was the chapter focusing on language processing. The historical portion of the chapter follows Grace Hopper (whose life is worth reading about in depth) and her efforts in language processing. The chapter discusses a model called Word2Vec, which converts words into vectors. If you have never heard of a vector before, you can think of one as an arrow in space, with a length and a direction; by putting these arrows together, we can create new arrows. In Word2Vec, a word's vector can be thought of as a point in a many-dimensional space, where words with similar meanings or contexts are near to each other. These vectors can be added or subtracted to answer questions about context; the classic example is that the vector for "king" minus "man" plus "woman" lands near the vector for "queen". The concept was appealing to me since it demonstrated how a hard problem (natural language processing) could be shifted into a mathematical context (finite-dimensional vector spaces) that I am more comfortable with.
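To make that idea concrete, here is a minimal sketch in Python of the kind of vector arithmetic the chapter describes. The word vectors below are invented for illustration (a real Word2Vec model learns much higher-dimensional vectors from large amounts of text), but the arithmetic is the same:

```python
# Toy sketch of Word2Vec-style vector arithmetic, not the real model.
# These three-dimensional vectors are made up for illustration; trained
# embeddings are learned from text and have hundreds of dimensions.
import numpy as np

embeddings = {
    "king":  np.array([0.8, 0.7, 0.1]),
    "queen": np.array([0.8, 0.1, 0.7]),
    "man":   np.array([0.2, 0.8, 0.1]),
    "woman": np.array([0.2, 0.2, 0.7]),
}

def cosine_similarity(a, b):
    """Measure how closely two vectors point in the same direction."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# "king" - "man" + "woman" should land near "queen".
target = embeddings["king"] - embeddings["man"] + embeddings["woman"]

# Find the known word whose vector is closest to the result.
best = max(embeddings, key=lambda w: cosine_similarity(embeddings[w], target))
print(best)  # queen
```

Cosine similarity is the usual way to compare word vectors, since it is the direction of an embedding, more than its length, that carries the meaning.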
In terms of math or programming prerequisites, this book is very accessible. If you have seen some basic algebra and have used a computer before, you should have no trouble enjoying this book. The authors do an excellent job of making every topic intuitive and understandable with clear explanations and well-crafted illustrations. The book would make a great gift for anyone interested in computer science, mathematics, or technology.