Wednesday, March 5, 2025

Expand Your Scope

 
While working out today at the gym, I caught this statistic on Fox News in the context of reactions to Trump's March 4 speech to Congress:

21% of voters approve of Democratic leadership in Congress

I don't recall seeing a citation for that number, but a little searching traced it to a poll conducted by Quinnipiac University and published on February 19, 2025: https://poll.qu.edu/poll-release?releaseid=3919

Thinking there might be more to the story than a record low number of people approving of Democrats, here's more from the same poll that wasn't included in the Fox News coverage:

  • 21% of Democratic voters approve of Democratic leadership in Congress
  • 40% of Republican voters approve of Republican leadership in Congress
  • 49% of Democratic voters disapprove of Democratic leadership in Congress
  • 52% of Republican voters disapprove of Republican leadership in Congress

When it comes to disapproval ratings, Republicans lead Democrats. This was not part of the Fox coverage I saw.

Moreover, the 21% approval statistic was inserted into a broadcast about Democratic behavior during Trump's Joint Session of Congress speech last night. The poll results shed no light on people's reactions to Democrats on March 4, since the poll was taken in mid-February. The dates don't match up--always an important fact to check.

There are certainly more details in the Quinnipiac poll than reported here, and certainly more than what Fox News reported. It helps to expand one's scope to make sense of numbers.

Thursday, February 27, 2025

SMS Scam: Toll Ways Notice of Evasion


 This message appeared in my text messages this morning:

The Toll Roads Notice of Toll Evasion: You have an unpaid toll bill on your account. To avoid late fees, pay within 12 hours or the late fees will be increased and reported to the DMV.

https://secure.getipassce.xin/payabill

(Please reply Y, then exit the text message and open it again to activate the link, or copy the link to your Safari browser and open it)

The Toll Roads team wishes you a great day!
It's the kind of message that makes you wonder if you missed a toll payment (especially if you live in an area with road tolls, as I do). Rather than click the link and follow the payment directions in the message--which should ALWAYS be treated as a red flag, even if the message turns out to be genuine--I did a quick check online with the following query:

Toll Roads Notice of Toll Evasion

That phrase, taken verbatim from the message, is specific enough to match similar reports online. Sure enough, there are reports of this item as a "smish," i.e., a scam text message (phishing by SMS). Some of the online sources addressing it had contacted their state Department of Transportation to verify whether the notice was genuine, and received confirmation that it was not.

 Checking secondary sources online for evidence of a fake or scam is one method to avoid falling prey to false information.

There is at least one other clue in the message to its questionable veracity: the URL. The extension used is .xin, a Chinese top-level domain. Why would someone in China be sending you a tollway notice? That, plus the odd directions for making a payment by copying the URL into a browser, just sounds fishy.
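
For the technically inclined, this kind of URL check can even be scripted. Here is a minimal sketch in Python; the watch list of suspicious top-level domains is purely illustrative (an assumption, not an authoritative list), and a real one would need regular updating:

    from urllib.parse import urlparse

    # Illustrative watch list only -- .xin appears here because of the
    # message above; a real list would come from abuse-tracking sources.
    SUSPICIOUS_TLDS = {"xin", "top", "icu"}

    def tld_looks_suspicious(url: str) -> bool:
        """Return True if the URL's top-level domain is on the watch list."""
        host = urlparse(url).hostname or ""
        tld = host.rsplit(".", 1)[-1].lower()
        return tld in SUSPICIOUS_TLDS

    print(tld_looks_suspicious("https://secure.getipassce.xin/payabill"))  # True

No script replaces judgment, of course; an unfamiliar top-level domain is a prompt to investigate, not proof of a scam.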

Always double check before opening your wallet!

Thursday, February 20, 2025

Bad at Math?

There is plenty to fact-check in the news these days.

A recent statement on the DOGE.gov website is attracting attention:

DOGE's total estimated savings are $55 billion. source

A screen shot of one of the contract savings is shown below. The page shows a discrepancy between the number in the outlined box, $8,000,000.00, and the number at the bottom of the page, $8,000,000,000.

That's not the same number. Which should be believed?  This is a clear case of putting more trust in an authentic source document rather than the commentary on it. Was it just a typo? If so, the second number is off by a factor of 1,000.  A larger version of the image may be found here.

The image is a screenshot of a contract cancellation for "program and technical support services" for ICE's Office of Diversity and Civil Rights, as described in federal records. DOGE cancelled a far smaller amount than it claimed. Why? Bad at math?
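
The arithmetic behind that "factor of 1,000" is easy to verify--a trivial check, sketched here in Python:

    # The two figures shown in the screenshot:
    contract_value = 8_000_000        # the number in the outlined box
    claimed_savings = 8_000_000_000   # the number at the bottom of the page

    print(claimed_savings // contract_value)  # 1000 -- three orders of magnitude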

The $8 billion figure is part of DOGE's $55 billion claim. Are there other errors as well?

When in doubt, search for an original source.
 

Tuesday, February 18, 2025

Follow Zelensky's Approval Ratings

At a press conference today, President Trump is recorded as saying:

"The leader in Ukraine is down at 4% approval ratings. Wouldn't the people of Ukraine need to have an election? Ukraine is being wiped out." source

Numbers and proper nouns make the best search terms. This goes for speculative searching (searching for something when you aren't sure you'll find it) and investigative searching (evaluating information you've found). But it's not always as easy as searching for a number.

In this case, if you search for 4% approval ratings Trump OR Ukraine, you'll retrieve the quote found above and other reporters' coverage of the news conference. The fact that the number appears in multiple locations doesn't mean it should be taken as fact (the only fact is that Trump said it).

In order to check if 4% is, in fact, the current approval rating of Ukraine's leader, a better query would be:

Zelensky approval rating
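
For anyone who wants to automate this corroboration step, here is a minimal sketch in Python. It simply builds a search URL from the claim's proper noun and topic, deliberately leaving out the disputed number so the results aren't just echoes of the claim (the choice of Google as the engine is an assumption; any engine works):

    from urllib.parse import quote_plus

    # Build a corroboration query from proper nouns and the topic,
    # omitting the disputed figure ("4%") so results aren't mere echoes.
    claim_subject = "Zelensky approval rating"
    url = "https://www.google.com/search?q=" + quote_plus(claim_subject)
    print(url)  # https://www.google.com/search?q=Zelensky+approval+rating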

Some of the results will be about Trump's press conference remarks since that is fresh news as of today. Examining the first page of results (in Google):

  • Fox News: Zelenskyy faces perilous re-election odds as US, Russia ...
  • Mujtaba Rahman on X: "Where is this 4% approval rating ...
  • The New York Times: Zelensky Could Face Tough Re-election Prospects, Polls ...
  • Statista: Volodymyr Zelenskyy's approval rating in Ukraine 2019-2024
  • Yahoo: Donald Trump claims Zelensky only has a 4% approval rating
  • The Brussels Times: Ukraine: President Zelenskyy's popularity took a dip in...

The articles may be skimmed to see if they are just reporting what Trump said or have additional information on Zelensky's approval ratings. Fox News, the thread on X, the New York Times, Statista and the Brussels Times are all in agreement: Zelensky's approval rating (last sampled in Dec. 2024) stood at 50%. That was down about 40 percentage points since the start of the conflict in 2022.

Remember what's important here: President Trump did not take a survey, he reported on one. His numbers don't agree with any other sources. Where did he come up with the number 4%? That is a question no one in the room asked him. But plenty of sources fact-checked him before passing along his erroneous claim.

This analysis from Statista comes to a different conclusion:

In October 2024, nearly seven out of ten Ukrainians approved of the activities of Volodymyr Zelenskyy as the president of the country.

Tuesday, February 11, 2025

Coming Up Empty: Where's the Evidence?

A good test case for trust comes when a claim is made and no evidence is provided.

Take for instance two statements made by Elon Musk today in the Oval Office and reported on numerous news services:

  • Some officials at the now-gutted U.S. Agency for International Development had been taking “kickbacks.” Musk said “quite a few people” in that agency somehow had “managed to accrue tens of millions of dollars in net worth while they are in that position.” 
  • Musk also claimed that some recipients of Social Security checks were as old as 150.

No specific examples of fraud or evidence for the claims were provided. So what do you do?

  1. Do you know Musk personally? If so, you may have some important context to make up for the missing information. Does he have a history of telling you things that are factual or not? Most people don't have a relationship with Musk, so few can use "personal knowledge" to decide if the information source can be believed.
  2. Can you check out the information to determine if it can be trusted or not? Lacking evidence makes this hard to do. This is an evolving news situation--there is only anecdotal information provided by one person.
  3. Do you believe the information without evidence, taking it on blind faith because other people in the room (e.g., Trump) go along with it? Note that Trump didn't exactly corroborate Musk's claim and was surprised at the results: 

    [President Trump] said he thought it was “crazy” that DOGE has been able to find so much fraud and waste in the federal government, arguing “we had no idea we were going to find this much.”  source

Couple this with another statement Musk made when asked about the truth of other claims he has made:

“Some of the things that I say will be incorrect and should be corrected. Nobody’s going to bat 1.000,” Mr. Musk said. “We all make mistakes. But we’ll act quickly to correct any mistakes.” source

Whenever you come up empty on evidence and lack a personal history with an information source, it's never a good idea to accept the information uncritically on blind faith. It's impossible to make an informed choice when evidence is lacking. 

What evidence can you find to fill those empty hands?

Addendum: On Feb. 12, 2025, the New York Times posted this fact-checking article regarding Musk's statements: https://www.nytimes.com/2025/02/11/us/elon-musk-doge-fact-check.html

Sunday, February 9, 2025

How do you test trust?

How do you decide what is a truth and what is a lie?

Trust

The lead question is essentially the same as asking, "what information do you trust?" The usual answers include:

  • I trust someone I know who has a proven track record of saying trustworthy things--in other words, I believed them and it turned out well.
  • I trust someone or some organization I don't know personally who has a good reputation--others report trusting them and believe it turned out well.

Today there is a lot of disagreement in current politics, religion, society and culture about who to trust. People have opposite views about individuals, news sources, and authorities. The question to ask isn't "Do I trust them?" but "If I believe them, what happens?" At some point everyone has to act on the information they receive; otherwise there is no going forward.

The test for trustworthiness is "Do the results make this source one I can continue to trust?" The danger in this approach is that you may waste time or money, or even be physically harmed (e.g., walking on thin ice).

Authority

What makes someone an authority? This is nearly identical to deciding who to trust.

  • I know the person and he or she tells me reliable things that I can verify by trying them.
  • I've never had personal experience with the person (or organization) but people who I respect tell me they are a reliable authority.

It's impossible to know everyone. People we are close to are the easiest to trust (or mistrust) because we have firsthand information about them. We aren't close to the majority of information sources in our world; therefore we depend on sources we think we know something about to tell us if the information from others is reliable.

Here's where a lot of erroneous assumptions get made. Unless we do our own research, we cannot know if something we believe to be true can be trusted. That's hard work. It's a lot easier to believe stuff we see or hear that agrees with things we already value.

So here's something to try: act on the information you want to test. Either read up on it from a variety of sources, or just trust your gut. See what results you get. But be careful: something may happen you don't expect or want. Take small steps at first--is the information something you can trust? Then share your findings with others who trust you.

In the days ahead, we'll apply this test to claims made online by individuals and organizations we don't know personally.

Friday, January 31, 2025

Fooled by AI?


Opportunities to be misled by online information appear to be on the rise, according to 1,000 American teens who participated in the following study by Common Sense Media.

Research Brief: Teens, Trust, and Technology in the Age of AI

These teens' realizations are worth factoring into conversations about the content verification efforts, or lack thereof, of online platform providers. The implication is that content trust very much matters to today's 13- to 18-year-olds.

Thursday, January 30, 2025

Facts v. Speculation

Case Studies in the News

As individuals and news outlets report on the tragic mid-air collision of American Airlines Flight 5342 and a military helicopter, very different responses help to illustrate differences in the accuracy and reliability of the information cited.

Facts

Source: Associated Press -- "Skaters Jinna Han and Spencer Lane were among those killed, along with their mothers, and coaches Evgenia Shishkova and Vadim Naumov, said Doug Zeghibe, CEO of the Skating Club of Boston, during a Thursday news conference."

Noteworthy in this report is the presence of names that may easily be fact-checked. Proper nouns and numbers are excellent terms for investigative searching, as they may be corroborated--or not--by other sources.

Speculation

Source: CBS News -- "Asked directly how he came to the conclusion that diversity had something to do with the crash, Mr. Trump replied, 'because I have common sense.'"

Noteworthy in this report is the lack of evidence cited and in its place the role of common sense. Common sense may seem trustworthy to the person who cites it, but there are many examples when common sense fails to foresee or prevent unwanted results. Furthermore, there is no way to fact check a personal belief about common sense other than to trust the person responsible or doubt that common sense is always right.

Monday, April 29, 2024

Become a Host Site

Your organization may now purchase resources and tools that we developed over the past 20+ years to strengthen information fluency. If you've found our live search challenges, keyword challenges, tutorials (how to query, evaluate information and avoid plagiarism) and citation wizards useful with your students (and staff), you can keep them alive on your own site.

A few assets have been removed, but most of our site is still up and running, mainly because potential partners are considering which assets they want to host on their own sites.
 
If you also have an interest in obtaining the rights to materials we created, please write to Carl Heine, managing partner at carl@21cif.com.
 
More info and prices here: Product Information

Tuesday, March 26, 2024

The end of an era: Information Fluency is closing

 

On April 25, 2024, 21st Century Information Fluency will close its site. After 23 years of supporting countless librarians and teachers with resources for navigating the fast-moving waters of the Internet and helping students find, evaluate and use information ethically, we will no longer be accessible.

If your institution is interested in acquiring any of our training resources or tools (e.g., Citation Wizards, MicroModules, Search Challenges, etc.), please contact us to find out more: carl@21cif.com

Saturday, March 25, 2023

A first look at Google's Bard AI Search Engine


I recently signed up to try Bard, Google's new AI search engine. As the site says, Bard is still in its experimental stage and won't necessarily find the right answers. This disclaimer may have been prompted by the embarrassing mistake Google made when it published Bard's now famous inaccurate answer to a space telescope query, which precipitated a $100 billion market devaluation for Google.

So, as an experiment on the experimental platform, I entered a classic search challenge: "How many buffalo are there today in North America?" (I didn't place quotes around the query.) The new AI platform should be proficient at parsing the meaning, which isn't tricky, except that a better term for buffalo is bison--a substitution Google quickly made.

The first result was reasonable sounding: 400,000 bison in North America. This was accompanied by a description of bison. Missing, however, was a citation. I could not tell where Google had gathered this information. For anyone doing research, that is a big omission--it makes it impossible to fact-check details against the source.

As I looked for a possible source, I clicked the New Response button. To my surprise, Google served up a different answer with no mention of a source: 1.5 million bison. I tried it a third time: 200,000 bison in North America. Fourth time: 500,000.

A screenshot of the third query shows that clicking 'View other drafts' produced still other numbers.

Of course, the question is "Which number is right?" They can't all be.

These results are essentially the same as entering the query in regular Google and looking at the first page of results. The numbers are all over the place. To determine which has sufficient credibility, one needs to look at the source, the publication date and what organizations link to the information.

Practically speaking, it may not be possible to determine the best number of bison. That is why the recommendation for using information is to cite the source (according to... the number is...). Bard doesn't make that possible (yet). Let's hope the developers behind Bard see the benefit of providing source details as they continue to refine it.



Thursday, February 16, 2023

At a Crossroads? The Intersection of AI and Digital Searching


Microsoft's foray into next generation searching powered by Artificial Intelligence is raising concerns.

Take, for example, Kevin Roose, a technology columnist for The New York Times, who has tried Bing and interviewed the ChatGPT bot that interfaces with Bing. He describes his experience as "unsettling." (Roose's full article here). 

Initially, Roose was so impressed by Bing's new capabilities that he decided to make Bing his default search engine, replacing Google. (It should be noted that Google recognizes the threat to its search engine dominance and is planning to add its own AI capabilities.) But a week later, Roose changed his mind, more alarmed by the emergent possibilities of AI than charmed by the first blush of wonderment produced by AI-powered searching. He thinks AI isn't ready for release--or people aren't ready for contact with AI--yet.

Roose pushed the AI, which called itself 'Sydney,' beyond what it was intended to do, which is to help people with relatively simple searches. His two-hour conversation probed existential and dark questions, which left him "unable to sleep afterwards." Admittedly, that's not a normal search experience. Microsoft acknowledged that's why only a handful of testers have access to its nascent product at the moment.

All this gives the feeling that we will soon be at a crossroads, where what we know about search engines and strategies is about to change. How much isn't certain, but there are already a couple of warnings:

  • AI seems more polished than it is. One of the complaints from testers like Roose is that AI returns "confident-sounding" results that are inaccurate and out-of-date. A classic in this regard is Google's costly mistake of publishing an answer generated by its own AI bot (known as Bard) to the question, "what telescope was the first to take pictures of a planet outside the earth's solar system?" Bard came back with a wrong answer, but no one at Google fact-checked it. As a result, Google's parent company Alphabet lost $100 billion in market value. (source)
  • AI makes it easier to use natural language queries. Instead of the whole question about the telescope in the bullet above, current search box strategy would suggest TELESCOPE FIRST PLANET OUTSIDE "SOLAR SYSTEM" is just as effective as a place to start. Entering that query in Google, the top result is a NASA press release from Jan 11, 2023, which doesn't exactly answer the question, but is probably why Bard decided that it did. Apparently AI takes a very human leap in thinking it found the answer to the question when, in fact, the information answers a different question: "what telescope was the first to confirm a planet's existence outside the earth's solar system?" This demonstrates one of the five problems students have with searching: misunderstanding the question. AI isn't ready yet to take care of that problem.

There's much more to come on this topic.

Tuesday, February 14, 2023

New: GUIDED Search Challenges

I realized not long ago that TIMED search challenges were out-of-step with my current thinking about information fluency. 

Being fluent doesn't mean locating the "right" answer every time, on the first attempt, or as fast as possible. A timed challenge puts pressure on the searcher, but this is not how it is in the real world. What matters, when one is trying to find information that 1) is not yet known and 2) is in a place that is still unknown, is being able to locate it, even after multiple failures. That can still be fluency. 

As a result, the previous 7 Timed Search Challenges have been archived--they are still available--and a new format has been introduced. Instead of unlimited attempts, one now gets 5 tries, each accompanied by an expert search hint to guide the process. 

Search challenges like these are not intended for purposes of evaluation, but learning: learning to think like a digital researcher who is fluent with a variety of search box strategies. 

Give them a try! Some are familiar and some are new. There are now 8 Guided Search Challenges, followed by 8 more in a series called Needle and Haystack.  

Guided Search Challenges

Monday, January 30, 2023

Guided Search Challenges

Taking a lesson from my last post, I refreshed the Needle and Haystack Challenge series I created a couple years ago on the Information Fluency site. I realized that the "game" didn't teach much about search strategy. Instead, it was focused primarily on language skills. 

Over the weekend I refreshed my earlier work to embed search hints instead of having students try to figure out mystery clues that would guide them to the right information. In the process, I replaced the Identity Challenge with a new one that reinforces the keyword selection process instead of selecting the right database to search. The Identity Challenge, trying to find the unidentified author of an image, would be better as part of a series on knowing WHERE to search, not WHAT WORDS to use.

There are four search challenges in the current set:

  • ACORN -- finding the name of an obscure part of an acorn
  • INTRUDERS -- finding the first known instance of a wall that failed to keep out intruders
  • HAUNTED-HIKE -- finding the location of a hike reputed to be one of the most haunted places
  • RECLAMATION -- finding out the budget for a massive land reclamation project in Singapore

Each one is worth up to 5 points. The scoring follows the 1-in-5 Rule: on average, you have a 1 in 5 chance of using the same keywords on your first search as the person who wrote the information you are looking for. Find the answer to a challenge on the first try and you earn 5 points. If you take more than 5 tries, you earn nothing, but we explain the answer. Along the way, search hints are provided that an expert researcher might use.
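
For the curious, the scoring rule can be expressed in a few lines. The sketch below (in Python) assumes one point is deducted per extra attempt; the description above only fixes the endpoints (5 points on the first try, nothing after the fifth):

    def score(attempts: int) -> int:
        # Assumed linear decrement: 5 points on try 1 down to 1 point on try 5.
        if 1 <= attempts <= 5:
            return 6 - attempts
        return 0

    print([score(n) for n in range(1, 7)])  # [5, 4, 3, 2, 1, 0]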

Curious? Give it a try. It's a free tool to help students test their ability to find better keywords. It also reinforces the practice of looking for better words in search results when the information there doesn't answer your question. 

Needle and Haystack Challenge

Tuesday, January 17, 2023

How I failed an Information Literacy Assessment

 I often "check out the competition" so to speak. This time it was NorthStar, a St. Paul, MN-based literacy company that offers assessments covering a range of topics from information literacy to operating systems, software packages and career search skills.

Their information literacy assessment consists of 32 performance-based and multiple-choice items woven around the stories of three individuals involved in information literacy tasks. It's quite easy to take the assessment, assisted by audio storytelling. I thought I did pretty well--and then I got a report at the end informing me I had failed, with a 74% accuracy rate.

So I took the assessment again.

Not all the items seem specifically linked to what I'd call information literacy. Several depend on having lived circumstances similar to the case studies.  I did fine on these, having experienced financial deprivation, for example. Nonetheless, answers that might make sense are counted wrong if they violate an implicit principle such as 'don't go deeper into debt by taking out a loan if you are already in debt.' That lesson has to be learned by reading or listening to sage advice or the hard way, by accumulating debts. It's not an information literacy skill, yet it is assessed as one.

Another item resembles an information literacy skill: knowing what to search for. Provided with a list of criteria for finding a job, the task essentially is to click synonyms that match the criteria. Research demonstrates that this is one of the key failures students make when searching: knowing what to search for. However, the assessment uses these as indicators of whether and when one finds matching information. Knowing how to find answers in the first place is usually the real challenge and where students tend to stumble.

Among other items that seem removed from information literacy are project management, reading, and a basic understanding of careers in healthcare. Without a doubt, information literacy depends on fundamental skills like knowing a language well enough to use it, thinking methodically, being persistent, learning from failures and a host of others. But these are all primary skills and dispositions. Information literacy is a secondary skill that builds on them. If a student fails at such primary tasks, the solution is not information literacy training.

The assessment does contain some good examples of information literacy:

  • identifying optimal keywords that match one's search criteria
  • distinguishing between ads and other content
  • knowing how to use search engine filters
  • knowing how to read results
  • knowing how to navigate a Web page
  • knowing where to search for relevant information
  • evaluating the "fit" of information found

The second time I took the assessment I was more careful and I passed. I still missed three items, though I don't consider them fundamental to information literacy.

Questions that remain:

  • Is knowing how to create a spreadsheet or how to bookmark a page an information literacy skill?
  • In what ways are information literacy or fluency skills distinct from computer or software proficiencies? One answer to this is the Digital Information Model found here.
  • What is a passing score for information literacy? Failing with 74% the first time and passing the second time with 87% reminds me that a numerical cutoff for this cluster of secondary skills is really hard to justify. No one performs at 100% all the time as an effective, efficient, accurate and ethical consumer of online information. We strive to be better than 50%, however. That's why the threshold is set low on our assessments and 75% is considered mastery. That number is borne out in search results from our studies. Being right 3 out of 4 times is a pretty decent accomplishment in the online Wild West.

Thursday, November 24, 2022


In today's "Information Fluency/Literacy" search feed, I found this article:

Students create content to fight disinformation, revive media trust

I've always valued students creating content, not just curriculum writers. As a curriculum author, it's easy to create what one thinks will grab students' attention and result in learning. But experience has taught me that giving projects to students to complete is hard to beat in terms of attention-getting and self-directed learning. For that reason, I applaud the Out of the Box Media Literacy Initiative for their efforts establishing a contest inviting students to answer pressing questions about disinformation, hate speech, and media distrust. 
 
To participate in the contest, students prepared 90-second original videos. Here are the guidelines:
  • 1st Category: High school students
    How should a media and information literate individual address fellow citizens who are misinformed, hateful, or discriminatory?
  • 2nd Category: College students
    How can media and information literacy help in reviving public trust lost in the media due to disinformation and hate speech?

The winning submission in the high school category emphasized "the duty to promote a culture of critical thinking combined with compassion. 'While you come across many who are ignorant, take a moment to not only remind them, but yourself of your intentions. Engage, not isolate. Encourage, not demoralize.'" (Allen Justin Mauleon, 2022)

Watch the video here

This contest took place in the Philippines as part of Global Media and Information Literacy Week in October, 2022.

Friday, July 22, 2022

Antidote to Disinformation

Did Lawmakers Finally Figure Out That Critical News Literacy is the Antidote to Disinformation?

Here's an insightful piece on critical news literacy and how education is a solution.  How do you teach critical news literacy? Feel free to share thoughts.

Read the full story here

Tuesday, July 19, 2022

Financial Fluency


Information fluency applies to a variety of topics including financial fluency. 

We've added a new category to our Annotated Links. It currently has one listing, from the University of Denver, which covers a range of topics related to financial apps:

  • Mobile Banking
  • Mobile Payments
  • Budgeting Apps
  • Cybersecurity Tips for Fintech Apps
  • Fintech Resources for Each Stage of Your Life 

Each section provides helpful step-by-step instructions to help reduce financial risk when using online resources. 

If you have similar resources to suggest, please send the links to our Help address.

https://21cif.com/resources/links/financial_fluency

Thursday, May 5, 2022

Beyond Information Literacy?

 

The differences between illiteracy, literacy and fluency are fuzzy, at best, when it comes to digital information competencies.

The Spring 2022 Feature article in the Full Circle Kit examines the lines between incompetence and fluency using the results of a study conducted by 21cif at Northwestern University's Center for Talent Development. 

The data suggests that a minimum competency for someone to be identified as 'literate' is a 60% success rate on search and retrieval tasks. The point at which fluency starts is less clear.

Read the whole article here

Tuesday, May 3, 2022

Recommended reading: "Why we need information literacy classes" by Victor Shi, Chicago Tribune

The following article by Victor Shi, an eloquent Gen Z'er, appeared recently in the Chicago Tribune (May 2, 2022). He makes a good argument for the need for information literacy instruction.

Fifty years ago, the national networks CBS, ABC and NBC dominated television screens in America and were the primary way voters obtained information. Each network, along with newspapers and radio, told its audience facts first, and all agreed on what the facts were. That meant Americans had a shared understanding of the truth — which is what led to the erosion of both Democratic and Republican public support for then-President Richard Nixon during the Watergate investigation.

But the time of Democrats and Republicans agreeing on facts is no more. In the early 1980s, cable news networks emerged. The late ‘80s and early ‘90s brought the internet, and Six Degrees became the first social media platform later in the ‘90s. With each development, avenues for information grew more abundant. People weren’t confined to newspapers and the three news stations for information. Instead, we gained the ability to access information anywhere — and with less and less scrutiny.

Read the whole article here