Sunday, February 9, 2025

How do you test trust?

How do you decide what is a truth and what is a lie?

Trust

The lead question is essentially the same as asking, "What information do you trust?" The usual answers include:

  • I trust someone I know who has a proven track record of saying trustworthy things--in other words, I believed them and it turned out well.
  • I trust someone or some organization I don't know personally who has a good reputation--others report trusting them and believe it turned out well.
There is a lot of disagreement today in politics, religion, society, and culture about whom to trust. People hold opposite views about individuals, news sources, and authorities. The question to ask isn't "Do I trust them?" but "If I believe them, what happens?" At some point everyone has to act on the information they receive; otherwise there is no going forward.
The test for trustworthiness is "Do the results make this source one I can continue to trust?" The danger in this approach is that you may waste your time or money, or be physically harmed (e.g., by walking on thin ice).

Authority

What makes someone an authority? The question is nearly identical to deciding whom to trust.

  • I know the person and he or she tells me reliable things that I can verify by trying them.
  • I've never had personal experience with the person (or organization) but people who I respect tell me they are a reliable authority.
It's impossible to know everyone. People we are close to are the easiest to trust (or mistrust) because we have firsthand information about them. We aren't close to the majority of information sources in our world; therefore, we depend on sources we think we know something about to tell us whether the information from others is reliable.
Here's where a lot of erroneous assumptions get made. 
Unless we do our own research, we cannot know if something we believe to be true can be trusted. That's hard work. It's a lot easier to believe stuff we see or hear that agrees with things we already value.
So here's something to try: act on the information you want to test. Either read up on it from a variety of sources or just trust your gut, and see what results you get. But be careful: something may happen that you don't expect or want. Take small steps at first--is the information something you can trust? Then share your findings with others who trust you.

In the days ahead, we'll apply this test to claims made online by individuals and organizations we don't know personally.

Friday, January 31, 2025

Fooled by AI?


Opportunities to be misled by online information appear to be on the rise, according to 1,000 American teens who participated in the following study by Common Sense Media.

Research Brief: Teens, Trust, and Technology in the Age of AI

These teens' realizations are worth factoring into conversations about the content verification efforts, or lack thereof, of online platform providers. The implication is that content trust matters very much to today's 13- to 18-year-olds.

Thursday, January 30, 2025

Facts v. Speculation

Case Studies in the News

As individuals and news outlets report on the tragic mid-air collision of American Airlines Flight 5342 and a military helicopter, two very different responses illustrate how the accuracy and reliability of cited information can vary.

Facts

Source: Associated Press -- "Skaters Jinna Han and Spencer Lane were among those killed, along with their mothers, and coaches Evgenia Shishkova and Vadim Naumov, said Doug Zeghibe, CEO of the Skating Club of Boston, during a Thursday news conference."

Noteworthy in this report is the presence of names that may easily be fact-checked. Proper nouns and numbers are excellent terms for investigative searching, as they may be corroborated--or not--by other sources.
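The idea of pulling proper nouns and numbers out of a claim to use as investigative search terms can be sketched in a few lines of Python. This is a rough illustration only, not real named-entity recognition: it simply treats runs of capitalized words and digit strings as candidate fact-checking terms, using a shortened version of the AP excerpt above as the sample.

```python
import re

def fact_check_terms(text):
    """Extract candidate fact-checking terms from a quoted claim.

    A rough regex sketch, not real named-entity recognition: runs of
    two or more capitalized words are treated as proper nouns, and
    digit strings as checkable numbers.
    """
    # Runs of two or more capitalized words (e.g., "Spencer Lane")
    names = re.findall(r"[A-Z][a-z]+(?:\s+[A-Z][a-z]+)+", text)
    # Digit strings, allowing thousands separators (e.g., "400,000")
    numbers = re.findall(r"\d[\d,]*", text)
    return names + numbers

quote = ("Skaters Jinna Han and Spencer Lane were among those killed, "
         "said Doug Zeghibe, CEO of the Skating Club of Boston.")
print(fact_check_terms(quote))
# ['Skaters Jinna Han', 'Spencer Lane', 'Doug Zeghibe', 'Skating Club']
```

Each extracted term can then be dropped into a search engine to see whether other, independent sources corroborate the claim. Note the crude pattern also captures ordinary capitalized words like "Skaters"; a human searcher would filter those out.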

Speculation

Source: CBS News -- "Asked directly how he came to the conclusion that diversity had something to do with the crash, Mr. Trump replied, 'because I have common sense.'"

Noteworthy in this report is the absence of cited evidence and, in its place, an appeal to common sense. Common sense may seem trustworthy to the person who cites it, but there are many examples where common sense fails to foresee or prevent unwanted results. Furthermore, there is no way to fact-check an appeal to common sense other than to trust the person making it or to doubt that common sense is always right.

Monday, April 29, 2024

Become a Host Site

 


Your organization may now purchase resources and tools that we developed over the past 20+ years to strengthen information fluency. If you've found our live search challenges, keyword challenges, tutorials (on how to query, evaluate information, and avoid plagiarism), and citation wizards useful with your students (and staff), you can keep them alive on your own site.

A few assets have been removed, but most of our site is still up and running, mainly because potential partners are considering which assets they want to host on their own sites.
 
If you are interested in obtaining the rights to materials we created, please write to Carl Heine, managing partner, at carl@21cif.com.
 
More info and prices here: Product Information

Tuesday, March 26, 2024

The end of an era: Information Fluency is closing

 

On April 25, 2024, 21st Century Information Fluency will close its site. After 23 years of supporting countless librarians and teachers with resources for navigating the fast-moving waters of the Internet and helping students find, evaluate, and use information ethically, we will no longer be accessible.

If your institution is interested in acquiring any of our training resources or tools (e.g., Citation Wizards, MicroModules, Search Challenges), please contact us to find out more: carl@21cif.com

Saturday, March 25, 2023

A first look at Google's Bard AI Search Engine


I recently signed up to try Bard, Google's new AI search engine. As the site says, Bard is still in its experimental stage and won't necessarily find the right answers. This disclaimer may have been prompted by the embarrassing mistake Google made when it published Bard's now-famous inaccurate answer to a space telescope query, which precipitated a $100 billion market devaluation for Google.

So, as an experiment on the experimental platform, I entered a classic search challenge: "How many buffalo are there today in North America?" (I didn't place quotes around the query.) The new AI platform should be proficient at parsing the meaning, which isn't tricky, except that a better term for buffalo is bison--a substitution Google quickly made.

The first result sounded reasonable: 400,000 bison in North America, accompanied by a description of bison. Missing, however, was a citation. I could not tell where Google had gathered this information. For anyone doing research, that is a big omission--it makes it impossible to fact-check details at their source.

As I looked for a possible source, I clicked the New Response button. To my surprise, Google served up a different answer with no mention of a source: 1.5 million bison. I tried it a third time: 200,000 bison in North America. Fourth time: 500,000.

On the third query, clicking 'View other drafts' produced still other numbers.

Of course, the question is "Which number is right?" They can't all be.

These results are essentially the same as entering the query in regular Google and looking at the first page of results. The numbers are all over the place. To determine which has sufficient credibility, one needs to look at the source, the publication date and what organizations link to the information.

Practically speaking, it may not be possible to determine the best number of bison. That is why the recommendation for using information is to cite the source (according to... the number is...). Bard doesn't make that possible (yet). Let's hope the developers behind Bard see the benefit of providing source details as they continue to refine it.



Thursday, February 16, 2023

At a Crossroads? The Intersection of AI and Digital Searching


Microsoft's foray into next generation searching powered by Artificial Intelligence is raising concerns.

Take, for example, Kevin Roose, a technology columnist for The New York Times, who has tried Bing and interviewed the ChatGPT bot that interfaces with Bing. He describes his experience as "unsettling." (Roose's full article here). 

Initially, Roose was so impressed by Bing's new capabilities that he decided to make Bing his default search engine, replacing Google. (It should be noted that Google recognizes the threat to its search engine dominance and is planning to add its own AI capabilities.) But a week later, Roose had changed his mind, more alarmed by the emergent possibilities of AI than dazzled by the first blush of wonderment that AI-powered searching produced. He thinks AI isn't ready for release--or people aren't ready for contact with AI yet.

Roose pushed the AI, which called itself 'Sydney,' beyond what it was intended to do: help people with relatively simple searches. His two-hour conversation probed existential and dark questions that left him "unable to sleep afterwards." Admittedly, that's not a normal search experience, and Microsoft acknowledged as much--which is why only a handful of testers have access to the nascent product at the moment.

All this gives the feeling that we are approaching a crossroads, and that what we know about search engines and strategies is about to change. How much isn't certain, but there are already a couple of warnings:

  • AI seems more polished than it is. One complaint from testers like Roose is that AI returns "confident-sounding" results that are inaccurate or out of date. A classic example is Google's costly mistake of publishing an answer generated by its own AI bot (known as Bard) to the question, "what telescope was the first to take pictures of a planet outside the earth's solar system?" Bard came back with a wrong answer, but no one at Google fact-checked it. As a result, Google's parent company Alphabet lost $100 billion in market value. (source)
  • AI makes it easier to use natural language queries. Instead of the whole question about the telescope in the bullet above, current search-box strategy would suggest TELESCOPE FIRST PLANET OUTSIDE "SOLAR SYSTEM" as an equally effective place to start. Entering that query in Google, the top result is a NASA press release from Jan 11, 2023, which doesn't exactly answer the question but is probably why Bard decided that it did. Apparently, AI takes a very human leap in concluding it has found the answer when, in fact, the information answers a different question: "what telescope was the first to confirm a planet's existence outside the earth's solar system?" This demonstrates one of the five problems students have with searching: misunderstanding the question. AI isn't ready yet to take care of that problem.
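The keyword strategy in the second bullet--stripping a natural-language question down to its distinctive terms--can be sketched minimally in Python. The stopword list below is an illustrative assumption, not any search engine's actual list, and a human searcher would still add phrase quotes (e.g., "SOLAR SYSTEM") by judgment.

```python
import re

# Illustrative stopword list: an assumption for this sketch, not any
# search engine's real list.
STOPWORDS = {"what", "was", "the", "to", "take", "pictures", "of",
             "a", "an", "earth's"}

def keyword_query(question):
    """Reduce a natural-language question to distinctive search terms."""
    words = re.findall(r"[a-z']+", question.lower())
    kept = [w for w in words if w not in STOPWORDS]
    return " ".join(kept).upper()

q = ("What telescope was the first to take pictures of a planet "
     "outside the earth's solar system?")
print(keyword_query(q))  # TELESCOPE FIRST PLANET OUTSIDE SOLAR SYSTEM
```

The point of the exercise is that the shortened query carries the same distinctive terms as the full question, which is why the search-box strategy works as a starting place.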

There's much more to come on this topic.