Thursday, January 30, 2025

Facts v. Speculation

Case Studies in the News

As individuals and news outlets report on the tragic mid-air collision of American Airlines Flight 5342 and a military helicopter, two very different responses illustrate how the accuracy and reliability of cited information can vary.

Facts

Source: Associated Press -- "Skaters Jinna Han and Spencer Lane were among those killed, along with their mothers, and coaches Evgenia Shishkova and Vadim Naumov, said Doug Zeghibe, CEO of the Skating Club of Boston, during a Thursday news conference."

Noteworthy in this report is the presence of names that may easily be fact-checked. Proper nouns and numbers are excellent terms for investigative searching, as they may be corroborated--or not--by other sources.

Speculation

Source: CBS News -- "Asked directly how he came to the conclusion that diversity had something to do with the crash, Mr. Trump replied, 'because I have common sense.'"

Noteworthy in this report is the lack of evidence cited and, in its place, an appeal to common sense. Common sense may seem trustworthy to the person who cites it, but there are many examples in which common sense has failed to foresee or prevent unwanted results. Furthermore, there is no way to fact-check a personal belief about common sense other than to trust the person citing it--or to doubt that common sense is always right.

Monday, April 29, 2024

Become a Host Site

 


Your organization may now purchase resources and tools that we developed over the past 20+ years to strengthen information fluency. If you've found our live search challenges, keyword challenges, tutorials (how to query, evaluate information, and avoid plagiarism), and citation wizards useful with your students (and staff), you can keep them alive on your own site.

A few assets have been removed, but most of our site is still up and running, mainly because potential partners are considering which assets they want to host on their own sites.
 
If you also have an interest in obtaining the rights to materials we created, please write to Carl Heine, managing partner at carl@21cif.com.
 
More info and prices here: Product Information

Tuesday, March 26, 2024

The end of an era: Information Fluency is closing

 

On April 25, 2024, 21st Century Information Fluency will close its site. After 23 years of supporting countless librarians and teachers with resources for navigating the fast-moving waters of the Internet and helping students find, evaluate, and use information ethically, we will no longer be accessible.

If your institution is interested in acquiring any of our training resources or tools (e.g., Citation Wizards, MicroModules, Search Challenges, etc.), please contact us to find out more: carl@21cif.com

Saturday, March 25, 2023

A first look at Google's Bard AI Search Engine


I recently signed up to try Bard, Google's new AI search engine. As the site says, Bard is still in its experimental stage and won't necessarily find the right answers. This disclaimer may have been prompted by the embarrassing mistake Google made when it published Bard's now-famous inaccurate answer to a space telescope query, which precipitated a billion-dollar market devaluation for Google.

So, as an experiment on the experimental platform, I entered a classic search challenge: "How many buffalo are there today in North America?" (I didn't place quotes around the query.) The new AI platform should be proficient in parsing the meaning, which isn't tricky--except that a better term for buffalo is bison, which Google quickly corrected.

The first result sounded reasonable: 400,000 bison in North America. This was accompanied by a description of bison. Missing, however, was a citation. I could not tell where Google had gathered this information. For anyone doing research, that is a big omission--it makes it impossible to fact-check details against the source.

As I looked for a possible source, I clicked the New Response button. To my surprise, Google served up a different answer with no mention of a source: 1.5 million bison. I tried it a third time: 200,000 bison in North America. Fourth time: 500,000.

(Screenshot of the third query.) Clicking 'View other drafts' produced still other numbers.

Of course, the question is "Which number is right?" They can't all be.

These results are essentially the same as entering the query in regular Google and looking at the first page of results. The numbers are all over the place. To determine which has sufficient credibility, one needs to look at the source, the publication date and what organizations link to the information.

Practically speaking, it may not be possible to determine the best number of bison. That is why the recommendation for using information is to cite the source (according to... the number is...). Bard doesn't make that possible (yet). Let's hope the developers behind Bard see the benefit of providing source details as they continue to refine it.



Thursday, February 16, 2023

At a Crossroads? The Intersection of AI and Digital Searching


Microsoft's foray into next generation searching powered by Artificial Intelligence is raising concerns.

Take, for example, Kevin Roose, a technology columnist for The New York Times, who has tried Bing and interviewed the ChatGPT bot that interfaces with Bing. He describes his experience as "unsettling." (Roose's full article here). 

Initially, Roose was so impressed by Bing's new capabilities that he decided to make Bing his default search engine, replacing Google. (It should be noted that Google recognizes the threat to its search-engine dominance and is planning to add its own AI capabilities.) But a week later, Roose changed his mind, more alarmed by the emergent possibilities of AI than impressed by the first blush of wonderment produced by AI-powered searching. He thinks either AI isn't ready for release or people aren't ready for AI contact yet.

Roose pushed the AI, which called itself 'Sydney,' beyond what it was intended to do, which is to help people with relatively simple searches. His two-hour conversation probed existential and dark questions that left him "unable to sleep afterwards." Admittedly, that's not a normal search experience. Microsoft acknowledged that's why only a handful of testers have access to its nascent product at the moment.

All this gives the feeling that we are soon to be at a crossroads, and that what we know about search engines and strategies is about to change. How much isn't certain, but there are already a couple of warnings:

  • AI seems more polished than it is. One of the complaints from testers like Roose is that AI returns confident-sounding results that are inaccurate and out of date. A classic example in this regard is Google's costly mistake of publishing an answer generated by its own AI bot (known as Bard) to the question, "what telescope was the first to take pictures of a planet outside the earth's solar system?" Bard came back with a wrong answer, but no one at Google fact-checked it. As a result, Google's parent company Alphabet lost $100 billion in market value. (source)
  • AI makes it easier to use natural language queries. Instead of the whole question about the telescope in the bullet above, current search box strategy would suggest TELESCOPE FIRST PLANET OUTSIDE "SOLAR SYSTEM" is just as effective as a place to start. Entering that query in Google, the top result is from a NASA press release on Jan 11, 2023, which doesn't exactly answer the question, but is probably why Bard decided that it did. Apparently AI makes a very human leap to thinking it has found the answer to the question when, in fact, the information answers a different question: "what telescope was the first to confirm a planet's existence outside the earth's solar system?" This demonstrates one of the five problems students have with searching: misunderstanding the question. AI isn't ready yet to take care of that problem.
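The search box strategy in the second bullet can be sketched as a simple stopword filter: drop the question's common words and quote multiword phrases. A minimal, hypothetical illustration--the stopword list and the `to_keyword_query` helper are my assumptions, not how any real search engine parses queries:

```python
# Illustrative only: a toy reduction of a natural-language question
# to a keyword-style query, as described in the bullet above.
STOPWORDS = {
    "what", "was", "the", "to", "take", "pictures",
    "of", "a", "an", "is", "are", "earth's",
}

def to_keyword_query(question, phrases=()):
    """Drop stopwords and quote known multiword phrases."""
    q = question.lower().rstrip("?")
    placeholders = {}
    for i, phrase in enumerate(phrases):
        key = f"__P{i}__"          # protect the phrase from word splitting
        q = q.replace(phrase, key)
        placeholders[key] = f'"{phrase.upper()}"'
    kept = [w for w in q.split() if w not in STOPWORDS]
    return " ".join(placeholders.get(w, w.upper()) for w in kept)

print(to_keyword_query(
    "What telescope was the first to take pictures of a planet "
    "outside the earth's solar system?",
    phrases=("solar system",),
))  # → TELESCOPE FIRST PLANET OUTSIDE "SOLAR SYSTEM"
```

Run on the telescope question, it yields exactly the keyword query suggested above.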

There's much more to come on this topic.

Tuesday, February 14, 2023

New: GUIDED Search Challenges

I realized not long ago that TIMED search challenges were out-of-step with my current thinking about information fluency. 

Being fluent doesn't mean locating the "right" answer every time, on the first attempt, or as fast as possible. A timed challenge puts pressure on the searcher, but this is not how it is in the real world. What matters when one is trying to find information that 1) is not yet known and 2) is in a place that is still unknown is being able to locate it, even after multiple failures. That can still be fluency. 

As a result, the previous 7 Timed Search Challenges have been archived--they are still available--and a new format has been introduced. Instead of unlimited attempts, now one gets 5 tries, each with an expert search hint to guide the process. 

Search challenges like these are not intended for purposes of evaluation, but learning: learning to think like a digital researcher who is fluent with a variety of search box strategies. 

Give them a try! Some are familiar and some are new. There are now 8 Guided Search Challenges, followed by 8 more in a series called Needle and Haystack.  

Guided Search Challenges

Monday, January 30, 2023

Guided Search Challenges

Taking a lesson from my last post, I refreshed the Needle and Haystack Challenge series I created a couple years ago on the Information Fluency site. I realized that the "game" didn't teach much about search strategy. Instead, it was focused primarily on language skills. 

Over the weekend I refreshed my earlier work to embed search hints instead of having students try to figure out mystery clues that would guide them to the right information. In the process, I replaced the Identity Challenge with a new one that reinforces the keyword-selection process rather than choosing the right database to search. The Identity Challenge, which involves finding the unidentified author of an image, would be better as part of a series on knowing WHERE to search, not WHAT WORDS to use.

There are four search challenges in the current set:

  • ACORN -- finding the name of an obscure part of an acorn
  • INTRUDERS -- finding the first known instance of a wall that failed to keep out intruders
  • HAUNTED-HIKE -- finding the location of a hike reputed to be one of the most haunted places
  • RECLAMATION -- finding out the budget for a massive land reclamation project in Singapore

Each one is worth up to 5 points. The scoring follows the 1-in-5 Rule: on average, you have a 1 in 5 chance of using the same keywords on your first search as the person who wrote the information you are looking for. Find the answer to a challenge on the first try and you earn 5 points. If you take more than 5 tries, you earn nothing, but we explain the answer. Along the way, search hints are provided that an expert researcher might use.
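The scoring above can be sketched in a few lines. The endpoints (5 points on the first try, nothing after more than 5 tries) come from the description; the one-point-per-extra-attempt step in between is an assumption for illustration:

```python
def challenge_score(attempts):
    """Points earned for a guided search challenge.

    5 points for finding the answer on the first attempt; nothing
    after more than 5 attempts. The linear step between those two
    endpoints is assumed, not stated in the post.
    """
    if attempts < 1:
        raise ValueError("attempts must be at least 1")
    return max(0, 6 - attempts)

print(challenge_score(1))  # 5 points: answer found on the first try
print(challenge_score(6))  # 0 points: more than 5 tries
```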

Curious? Give it a try. It's a free tool to help students test their ability to find better keywords. It also reinforces the practice of looking for better words in search results when the information there doesn't answer your question. 

Needle and Haystack Challenge