My knee-jerk reaction to new things is usually apprehension. It’s an aspect of myself that I’m not especially proud of, so in the interest of fairness I try to take an extra beat before passing judgment. But even with that extra beat, I’m really wary of Microsoft’s and Google’s new AI-driven search announcements.
Lots of folks are already calling out the big problems with this: decreased traffic to original content publishers; bad information presented as authoritative; even climate change implications. The thing that really gives me the heebie-jeebies about it, though, is The Algorithm. We already live in a world where we’re constantly funneled content and information that some third-party algorithm has decided we should want, based on what it’s gleaned about us from the behavior it can see. It’s spread through social media (lookin’ at you, TikTok) and through news aggregators. But the idea that we’ll start being confronted with it when we’re actively searching for new information, as opposed to passively consuming it? That’s kinda terrifying.
Search engines succeed when they can correctly identify searcher intent—that is, when they know what information you want to find and can give it to you while staying flexible about the actual terms you use. Human communication works this way a lot of the time, and given the sheer number of variables and the complexity of interpretation involved, it makes sense to introduce AI into the equation. But when we do, it will be critically important to separate searcher intent from confirmation bias. “Serving you the information you’re looking for” treads perilously close to “serving you the interpretations you’re looking for,” and in all the same ways we’re currently battling with Facebook et al., that would create dangerous echo chambers.
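To make that distinction concrete, here’s a toy sketch of the gap between matching a searcher’s literal terms and matching their intent. This is entirely my own illustration: real engines use learned ranking models and embeddings, and the little concept table here is made up for the example.

```python
# Toy illustration only: a hand-rolled "concept" table standing in for
# whatever learned model a real search engine would use. None of this
# reflects how Google's or Microsoft's systems actually work.

DOCS = {
    "doc1": "how to fix a flat bicycle tire",
    "doc2": "repairing a punctured bike tyre at home",
    "doc3": "best road bikes of 2023",
}

# Hypothetical mapping from surface terms to shared concepts.
CONCEPTS = {
    "fix": "repair", "repairing": "repair", "punctured": "repair",
    "flat": "repair", "bicycle": "bike", "bike": "bike", "bikes": "bike",
    "tire": "tire", "tyre": "tire",
}

def keyword_score(query: str, doc: str) -> int:
    """Literal matching: only exact word overlap counts."""
    return len(set(query.split()) & set(doc.split()))

def intent_score(query: str, doc: str) -> int:
    """Intent matching: different words, same underlying concept."""
    q = {CONCEPTS.get(w, w) for w in query.split()}
    d = {CONCEPTS.get(w, w) for w in doc.split()}
    return len(q & d)

query = "fix flat bicycle tire"
for name, text in DOCS.items():
    print(f"{name}: keyword={keyword_score(query, text)}, "
          f"intent={intent_score(query, text)}")
# doc2 scores 0 on literal keywords but matches the intent well.
```

That flexibility is obviously what you want from a search engine. But notice that the CONCEPTS table is where all the editorial power lives: whoever decides which words “mean the same thing” is already deciding what you’ll see, and that’s exactly where the confirmation-bias worry creeps in.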
From a research standpoint, too, it gets hairy. Curation has always been an important question and responsibility for archivists and librarians, but the prevailing ethos seems to be “openness.” With AI doing the curating, I know I’d want extreme transparency around how it chooses which sources to promote and which to bury. I don’t like the idea of not making my own decisions about which information sources I can access. And while I imagine AI search engines aren’t intended to be the only source of information, it won’t take much for them to become the first and easiest stop. If it becomes increasingly burdensome to peek behind the curtain and do your own investigation, I fear we’ll lose a lot: research skills, critical evaluation, diversity of viewpoints…
Anyway, I’m veering into “old man yells at cloud” territory, and like I said up top, I don’t want to be a reflexive naysayer. But I don’t love the implications, and going forward I think it’s going to be important to keep an eye on the openness of our information channels, and not to lose sight of the need for a diversity of ways to get our answers.