Saturday, March 25, 2023

A first look at Google's Bard AI Search Engine


I recently signed up to try Bard, Google's new AI search engine. As the site says, Bard is still in its experimental stage and won't necessarily find the right answers. This disclaimer may have been prompted by the embarrassing mistake Google made when it published Bard's now-famous inaccurate answer to a space telescope query, a slip that precipitated a roughly $100 billion drop in market value for Google's parent company, Alphabet.

So, as an experiment on the experimental platform, I entered a classic search challenge: "How many buffalo are there today in North America?" (I didn't place quotes around the query.) The new AI platform should be proficient at parsing the meaning, which isn't tricky, except that the better term for buffalo is bison, a substitution Google quickly made.

The first result was reasonable-sounding: 400,000 bison in North America. This was accompanied by a description of bison. Something missing, however, was a citation. I could not tell where Google had gathered this information. For anyone doing research, that is a big omission: it makes it impossible to fact-check the details against the source.

As I looked for a possible source, I clicked the New Response button. To my surprise, Google served up a different answer with no mention of a source: 1.5 million bison. I tried it a third time: 200,000 bison in North America. Fourth time: 500,000.

Clicking 'View other drafts' on the third query produced still other numbers.

Of course, the question is "Which number is right?" They can't all be.
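Out of curiosity, here is a minimal Python sketch of the same experiment done programmatically: ask the same question several times and tally the numeric claims that come back. The ask callable is just a stand-in for clicking the New Response button (no Bard API is being used here), and the canned responses simply echo the four answers described above.

```python
import re
from collections import Counter

def extract_bison_count(answer: str) -> str | None:
    """Pull the first number that appears just before the word 'bison'."""
    match = re.search(r"([\d][\d,.]*\s*(?:million|thousand)?)\s+bison", answer, re.IGNORECASE)
    return match.group(1).strip() if match else None

def check_consistency(ask, question: str, samples: int = 4) -> Counter:
    """Ask the same question several times and tally the numeric claims."""
    counts = Counter()
    for _ in range(samples):
        answer = ask(question)  # stands in for one click of 'New response'
        counts[extract_bison_count(answer)] += 1
    return counts

# Canned responses standing in for the chatbot's varying answers:
canned = iter([
    "There are about 400,000 bison in North America today.",
    "Roughly 1.5 million bison live in North America.",
    "An estimated 200,000 bison remain in North America.",
    "There are approximately 500,000 bison in North America.",
])
print(check_consistency(lambda q: next(canned), "How many bison are there today in North America?"))
```

If the tally shows more than one distinct claim, at least some of the answers must be wrong, which is exactly the problem an uncited answer leaves you unable to resolve.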

These results are essentially the same as entering the query in regular Google and looking at the first page of results. The numbers are all over the place. To determine which number has sufficient credibility, one needs to look at the source, the publication date, and which organizations link to the information.

Practically speaking, it may not be possible to determine the best number of bison. That is why the recommendation when using such information is to cite the source ("according to..., the number is..."). Bard doesn't make that possible (yet). Let's hope the developers behind Bard see the benefit of providing source details as they continue to refine it.



Thursday, February 16, 2023

At a Crossroads? The Intersection of AI and Digital Searching


Microsoft's foray into next-generation searching powered by artificial intelligence is raising concerns.

Take, for example, Kevin Roose, a technology columnist for The New York Times, who has tried the new Bing and interviewed the ChatGPT-powered chatbot built into it. He describes his experience as "unsettling." (Roose's full article here)

Initially, Roose was so impressed by Bing's new capabilities that he decided to make Bing his default search engine, replacing Google. (It should be noted that Google recognizes the threat to its search engine dominance and is planning to add its own AI capabilities.) But a week later, Roose had changed his mind: he is now more alarmed by the emergent possibilities of AI than dazzled by the first blush of wonderment AI-powered searching produced. He thinks either AI isn't ready for release or people aren't ready for contact with AI.

Roose pushed the AI, which called itself 'Sydney,' beyond what it was intended to do, which is to help people with relatively simple searches. His two-hour conversation probed existential and dark questions that left him "unable to sleep afterwards." Admittedly, that's not a normal search experience; Microsoft acknowledged as much, which is why only a handful of testers have access to its nascent product at the moment.

All this gives the feeling that we are approaching a crossroads and that what we know about search engines and strategies is about to change. How much isn't certain, but there are already a couple of warnings:

  • AI seems more polished than it is. One of the complaints from testers like Roose is that AI returns "confident-sounding" results that are inaccurate and out of date. A classic in this regard is Google's costly mistake of publishing an answer generated by its own AI bot (known as Bard) to the question, "what telescope was the first to take pictures of a planet outside the earth's solar system?" Bard came back with a wrong answer, but no one at Google fact-checked it. As a result, Google's parent company Alphabet lost $100 billion in market value. (source)
  • AI makes it easier to use natural language queries. Instead of the whole question about the telescope in the bullet above, current search box strategy would suggest TELESCOPE FIRST PLANET OUTSIDE "SOLAR SYSTEM" is just as effective as a place to start (a rough sketch of this keyword-stripping idea follows this list). Entering that query in Google, the top result is a NASA press release from Jan 11, 2023, which doesn't exactly answer the question but is probably why Bard decided that it did. Apparently AI makes a very human leap, concluding it has found the answer when, in fact, the information answers a different question: "what telescope was the first to confirm a planet's existence outside the earth's solar system?" This demonstrates one of the five problems students have with searching: misunderstanding the question. AI isn't ready yet to take care of that problem.
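For illustration, here is a rough sketch (my own, not anything Google provides) of the keyword-stripping idea behind a query like TELESCOPE FIRST PLANET OUTSIDE "SOLAR SYSTEM": drop the question words and filler, keep the content words, and quote multiword concepts so a search engine treats them as a unit. The stopword and phrase lists are hand-picked for this single example.

```python
# A rough sketch of turning a natural-language question into a search-box query.
# The stopword and phrase lists below are hand-picked for this one example.
STOPWORDS = {"what", "which", "was", "the", "to", "take", "pictures", "of", "a", "an", "earth's"}
PHRASES = ["solar system"]  # multiword concepts worth quoting as a unit

def keyword_query(question: str) -> str:
    words = [w.strip('?.,"').lower() for w in question.split()]
    kept = [w for w in words if w not in STOPWORDS]   # drop question words and filler
    query = " ".join(kept)
    for phrase in PHRASES:
        query = query.replace(phrase, f'"{phrase}"')  # quote the phrase
    return query.upper()

print(keyword_query(
    "what telescope was the first to take pictures of a planet outside the earth's solar system?"
))
# -> TELESCOPE FIRST PLANET OUTSIDE "SOLAR SYSTEM"
```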

There's much more to come on this topic.

Tuesday, February 14, 2023

New: GUIDED Search Challenges

I realized not long ago that TIMED search challenges were out of step with my current thinking about information fluency.

Being fluent doesn't mean locating the "right" answer every time, on the first attempt, or as fast as possible. A timed challenge puts pressure on the searcher, but that is not how it works in the real world. What matters, when one is trying to find information that 1) is not yet known and 2) is in a place that is still unknown, is being able to locate it, even after multiple failures. That can still be fluency.

As a result, the previous 7 Timed Search Challenges have been archived (they are still available) and a new format has been introduced. Instead of unlimited attempts, one now gets 5 tries, each time with an expert search hint to guide the process.

Search challenges like these are not intended for purposes of evaluation, but learning: learning to think like a digital researcher who is fluent with a variety of search box strategies. 

Give them a try! Some are familiar and some are new. There are now 8 Guided Search Challenges, followed by 8 more in a series called Needle and Haystack.  

Guided Search Challenges

Monday, January 30, 2023

Guided Search Challenges

Taking a lesson from my last post, I refreshed the Needle and Haystack Challenge series I created a couple years ago on the Information Fluency site. I realized that the "game" didn't teach much about search strategy. Instead, it was focused primarily on language skills. 

Over the weekend I refreshed my earlier work to embed search hints instead of having students try to figure out mystery clues that would guide them to the right information. In the process, I replaced the Identity Challenge with a new one that reinforces the keyword selection process rather than selecting the right database to search. The Identity Challenge, which involves trying to find the unidentified author of an image, fits better in a series on knowing WHERE to search, not WHAT WORDS to use.

There are four search challenges in the current set:

  • ACORN -- finding the name of an obscure part of an acorn
  • INTRUDERS -- finding the first known instance of a wall that failed to keep out intruders
  • HAUNTED-HIKE -- finding the location of a hike reputed to be one of the most haunted places
  • RECLAMATION -- finding out the budget for a massive land reclamation project in Singapore

Each one is worth up to 5 points. The scoring follows the 1-in-5 Rule: on average, you have a 1 in 5 chance of using the same keywords on your first search as the person who wrote the information you are looking for. Find the answer to a challenge on the first try and you earn 5 points. If you take more than 5 tries, you earn nothing, but we explain the answer. Along the way, search hints are provided that an expert researcher might use.
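For the curious, here is a small sketch of how that scoring could be computed. The paragraph above only fixes the endpoints (5 points on the first try, nothing after 5 tries); the one-point drop per additional attempt is my assumption about the values in between.

```python
def challenge_score(tries: int, max_points: int = 5, max_tries: int = 5) -> int:
    """Points earned for a guided search challenge.

    5 points for success on the first try, 0 after more than max_tries.
    The one-point drop per extra attempt is an assumption; only the
    endpoints are stated above.
    """
    if tries < 1 or tries > max_tries:
        return 0
    return max_points - (tries - 1)

# challenge_score(1) == 5, challenge_score(3) == 3, challenge_score(6) == 0
```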

Curious? Give it a try. It's a free tool to help students test their ability to find better keywords. It also reinforces the practice of looking for better words in search results when the information there doesn't answer your question. 

Needle and Haystack Challenge

Tuesday, January 17, 2023

How I failed an Information Literacy Assessment

I often "check out the competition," so to speak. This time it was NorthStar, a St. Paul, MN-based literacy company that offers assessments covering a range of topics from information literacy to operating systems, software packages, and career search skills.

Their information literacy assessment consists of 32 performance-based and multiple-choice items woven around the stories of three individuals involved in information literacy tasks. It's quite easy to take the assessment, assisted by audio storytelling. I thought I did pretty well, and then I got a report at the end informing me that I had failed, with a 74% accuracy rate.

So I took the assessment again.

Not all the items seem specifically linked to what I'd call information literacy. Several depend on having lived through circumstances similar to those in the case studies. I did fine on these, having experienced financial deprivation, for example. Nonetheless, answers that might make sense are counted wrong if they violate an implicit principle such as 'don't go deeper into debt by taking out a loan if you are already in debt.' That lesson has to be learned by reading or listening to sage advice, or the hard way, by accumulating debts. It's not an information literacy skill, yet it is assessed as one.

Another item resembles an information literacy skill: knowing what to search for. Provided with a list of criteria for finding a job, the task is essentially to click synonyms that match the criteria. Research demonstrates that this is one of the key failures students make when searching: knowing what to search for. However, the assessment uses the synonyms only as indicators of whether one has found matching information. Knowing how to find answers in the first place is usually the real challenge and where students tend to stumble.

Among other items that seem removed from information literacy are project management, reading, and a basic understanding of careers in healthcare. Without a doubt, information literacy depends on fundamental skills like knowing a language well enough to use it, thinking methodically, being persistent, learning from failures, and a host of others. But these are all primary skills and dispositions. Information literacy is a secondary skill that builds on them. If a student fails at such primary tasks, the solution is not information literacy training.

The assessment does contain some good examples of information literacy:

  • Identifying optimal keywords that match one's search criteria
  • Distinguishing between ads and other content
  • Knowing how to use search engine filters
  • Knowing how to read results
  • Knowing how to navigate a Web page
  • Knowing where to search for relevant information
  • Evaluating the "fit" of information found

The second time I took the assessment I was more careful and I passed. I still missed three items, though I don't consider them fundamental to information literacy.

Questions that remain:

  • Is knowing how to create a spreadsheet or how to bookmark a page an information literacy skill?
  • In what ways are information literacy or fluency skills distinct from computer or software proficiencies? One answer to this is the Digital Information Model found here.
  • What is a passing score for information literacy? Failing with 74% the first time and passing with 87% the second reminds me that a numerical cutoff for this cluster of secondary skills is really hard to justify. No one performs at 100% all the time as an effective, efficient, accurate and ethical consumer of online information. We strive to be better than 50%, however. That's why the threshold is set low on our assessments and 75% is considered mastery. That number is borne out in search results from our studies. Being right 3 out of 4 times is a pretty decent accomplishment in the online Wild West.

Thursday, November 24, 2022


In today's "Information Fluency/Literacy" search feed, I found this article:

Students create content to fight disinformation, revive media trust

I've always valued content created by students, not just by curriculum writers. As a curriculum author, it's easy to create what one thinks will grab students' attention and result in learning. But experience has taught me that giving students projects to complete is hard to beat for attention-getting and self-directed learning. For that reason, I applaud the Out of the Box Media Literacy Initiative for establishing a contest that invites students to answer pressing questions about disinformation, hate speech, and media distrust.
 
To participate in the contest, students prepared 90-second original videos. Here are the guidelines:
  • 1st Category: High school students
    How should a media and information literate individual address fellow citizens who are misinformed, hateful, or discriminatory?
  • 2nd Category: College students
    How can media and information literacy help in reviving public trust lost in the media due to disinformation and hate speech?

The winning submission in the high school category emphasized "the duty to promote a culture of critical thinking combined with compassion. 'While you come across many who are ignorant, take a moment to not only remind them, but yourself of your intentions. Engage, not isolate. Encourage, not demoralize.'" (Allen Justin Mauleon, 2022)

Watch the video here

This contest took place in the Philippines as part of Global Media and Information Literacy Week in October 2022.

Friday, July 22, 2022

Antidote to Disinformation

Did Lawmakers Finally Figure Out That Critical News Literacy is the Antidote to Disinformation?

Here's an insightful piece on critical news literacy and how education is a solution.  How do you teach critical news literacy? Feel free to share thoughts.

Read the full story here