Tuesday, June 25, 2013

Reflections from ISTE


The Blogger's Cafe is nearly empty.

Most conference-goers have left the building for the River Walk and dinner plans. I had a late lunch so I've got a little time on my hands. Time for some reflection.

I presented at one session here in San Antonio. It was actually a BYOD workshop entitled "Five Mini Lessons in Information Fluency." From what I could tell it went well. Later tonight I'll email the participants and share with them more resources for taking the ideas home.

The hardest mini-lesson to get across, in my estimation, is browsing. The concept isn't difficult, although browsing can be a tricky way to search. Querying a search engine is the other way, and it is more oriented toward efficiency and results. Browsing is an adventure, and it's hard to define a strategy that always works best, other than to say that this way of searching is like playing a game of "hot or cold" where the objective is to keep getting warmer until the information being sought is found. Unlike the childhood game, there is no one to tell you whether your browsing is getting warmer or colder. It's an activity where interpretation and evaluation happen with every click. Browsing seems easier than it is, and an incredible amount of time can be wasted doing it.

Squeezing five lessons into 90 minutes should have been a snap, but we were rushed. Consequently, a teaching method I would have liked to use with browsing was passed over while we talked through the steps. Talking isn't a great way to teach browsing. It's ultimately a hands-on activity.

But the big danger is that when a class full of students starts to browse, whether they are in fifth grade or teachers and librarians at an ISTE conference, the "aha" moments are hard to capture. That's why I recommend using a tag-team approach to browsing practice. Provide only one computer. Assign a challenge to the group. Ask for a volunteer to come up to the front to drive the computer for one decision. View the result of the student's choice as a group. Decide if things are getting warmer or not. It's unlikely the student will get the information needed with one click. Therefore, the other decision this student makes is to select the next driver (or just go down the aisle or around the circle, as you wish).

After each click--no student gets more than one mouse press--elicit a group response: warmer or colder? If colder, the next student might merely want to hit the BACK button to return to a warmer place. As an alternative, you could spice up the activity by providing a "phone a friend" option if someone is really stuck.

Here's the challenge I used at ISTE for which I could have used this approach. I showed a typical Language Arts assignment: write a paper on the American Dream. I asked the participants to use a Subject Directory to identify themes other than financial prosperity. The top level of the Subject Directory lists categories like Home, Research, Sports, etc. The challenge is to drill down into a category (not all will be effective) to discover themes. It's a good use of browsing as a brainstorming strategy. In the workshop, this approach would have eaten up some time, but I think participants would have benefited from the interaction far more than from the solo searching they did. It's an activity that works on a lot of levels: skimming, recognizing relevance, finding relevant new keywords to follow in the results, failed attempts, persistence....

Here's a link to the Lesson prompt I used. Perhaps you can find ways to use an activity (not necessarily the American Dream content) like this with your students.

Tomorrow morning I head back to Chicago and home. It was good to take a moment to reflect in the Blogger's Cafe.

Friday, June 7, 2013

Freshness Dating Wizard


One of the more elusive search feats is to determine a date of publication. It's right up there with trying to track down an author when the creator of information uses a pseudonym or no name at all.

This prompted me to create a new search wizard to retrieve metadata from pages. In this case, the metadata is HTTP header information that is transmitted when pages are sent by a server. If the pages are .htm or .html (static Web pages), some of the metadata includes Last-Modified information, which may be a clue to the age of the information.

Last-Modified information may be retrieved in Firefox using Page Info (right-click on the page), but it seems to have disappeared from other browsers. Since students who use our Information Researcher challenges don't always have access to Firefox, providing another search tool seemed like a good idea.

Last-Modified information is not an exact way to determine when material was created, but it is useful. For example, if you check this blog post (the one you are reading now) using the Wizard, you will get Last-Modified information for the last time the entire blog was updated. A Blogspot page is an example of a dynamically created page, not a static one. Elements of the page, namely the ads, had never appeared here before you clicked on it. If you check the metadata for older posts on the site, you will see the same Last-Modified date. Another method is needed to determine the publication date of a blog post, which is fairly easy to find at the top of the post itself or in the URL.

Dynamically created pages don't really send Last-Modified data; what comes back is the day and time the server sent the information, which is the moment you requested it. Students can be confused using Firefox for this reason: dynamically created pages (those with extensions such as .asp, .php, and .xhtml, and some with no extension at all) are displayed in Firefox's Page Info as having a Last Modified date. In our Wizard, the simple version will tell you if Last-Modified is not available.

There's also a more comprehensive version of the Metadata search that retrieves server information, expiration date, cookie information, etc. for those who would like to see more information about a page, particularly dynamically created ones.
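
For the curious, the underlying technique is simple enough to sketch. Here is a minimal example, assuming Python and the third-party requests library; this is my own illustration of the idea, not the Wizard's actual code.

```python
import requests

def freshness_check(url):
    """Request only the HTTP headers for a page and report any dating clues."""
    # A HEAD request retrieves the headers without downloading the page body.
    response = requests.head(url, allow_redirects=True, timeout=10)

    last_modified = response.headers.get("Last-Modified")
    if last_modified:
        # Static pages (.htm/.html) usually carry a meaningful date here.
        print("Last-Modified:", last_modified)
    else:
        # Dynamically created pages typically omit it; the Date header only
        # tells you when the server sent the page, i.e., just now.
        print("Last-Modified not available -- likely a dynamic page.")

    # Headers a more comprehensive check might also display.
    for name in ("Date", "Server", "Expires", "Set-Cookie"):
        if name in response.headers:
            print(name + ":", response.headers[name])

freshness_check("http://example.com/")
```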

Try the new Wizard!

More information on Static and Dynamically created pages

Wednesday, April 24, 2013

Fake Tweet Result of Phishing

As a follow-up to yesterday's story about @AP's fake tweet, it has been reported that the hacked message came about an hour after company employees received an expertly crafted spear-phishing email.

Spear-phishing is getting harder to detect as successful practices inform future "phishes." What doesn't work is abandoned or reworked, and the bait becomes increasingly less suspicious.

Surprising or not, 19% of spear-phishing attempts are successful: someone in the organization takes the personalized bait and hands over secure information.

The effects of spear-phishing can be avoided by fact checking. I haven't seen a copy of the message received by AP employees yesterday. It would be interesting to see it and fact check it.

Can anyone find it?




Tuesday, April 23, 2013

Fake Tweet Sends Stocks Plummeting

As many articles have already made clear, Americans will react to news that sounds like terrorism.

Today's fake tweet shows how sensitive consumers of information really are.

A hack attack on the Associated Press' Twitter account resulted in "an erroneous tweet" claiming that two explosions occurred in the White House and that President Barack Obama was injured. It didn't take long (2 minutes) for Twitter to suspend the @AP Twitter account.

More than 4,000 retweets later, the credibility of the message was dealt a fatal blow when an AP spokesperson told NBC News the news was false.

Like the EKG of a country, the Dow Jones industrial average just after 1 p.m. showed the collective heartbeat. More than 140 points were lost in a flash. Five minutes later, much of the loss was regained.

According to Bob Sullivan of NBC News: "It's incredible what a single 12-word lie can do."

How could being an investigative searcher make a breaking lie less effective?

Fact checking the accuracy of a claim is a little trickier in the case of Twitter. Breaking news often comes through this channel before being picked up by major news outlets.

That is probably the clue. AP wouldn't be the first to break the news. Someone on the scene would have said it first; AP would carry it a minute or more later. All one would have to do is look for the source of the AP tweet.

Not being able to find an earlier tweet about this news is the tell-tale sign about its credibility. A good search engine for tweets is Topsy (http://topsy.com). Check it out before you react with your gut.

Tuesday, April 16, 2013

Crap Detection 101

Amateur Whale Research Kit?
Howard Rheingold is credited with Crap Detection 101: How to tell accurate information from inaccurate information, misinformation, and disinformation.

Put your crap detector to work here: http://www.icrwhale.net/products/amateur-whale-research-kit

Some of the usual investigative techniques (backlinks, fact checking) don't work very well. What is it that "tells" you this information, at face value, cannot be trusted?

Wednesday, March 13, 2013

High Cost of Being Gullible

The price of cyber crime is astounding.

  • UK Guardian: Consumers and businesses in the UK lost an estimated £27 billion in 2012 due to cybercrime.[i] 
  • Ponemon Institute: The average annualized cost of cybercrime for 56 benchmarked U.S. organizations is $8.9 million per year.[ii]  
  • People’s Public Security University of China: In 2012, economic losses from Internet crimes in China totaled an estimated $46.4 billion (RMB 289 billion).[iii]
And it's growing annually.

So what does being gullible cost the average American?

See if you can find the cost to the average Senior Citizen in the US today.

What does this say about the need to investigate online information?


[i] John Burn-Murdoch, "UK was the world's most phished country in 2012 – why is it being targeted?", The Guardian, last modified February 27, 2013, http://www.guardian.co.uk/news/datablog/2013/feb/27/uk-most-phishing-attacks-worldwide.
[ii] "2012 Cost of Cyber Crime Study: United States", Ponemon Institute, October 2012, http://www.ponemon.org/local/upload/file/2012_US_Cost_of_Cyber_Crime_Study_FINAL6%20.pdf.
[iii] "Internet crimes cost China over $46 billion in 2012, report claims", The Next Web, last modified January 29, 2013, http://thenextweb.com/asia/2013/01/29/china-suffered-46-4b-in-internet-crime-related-losses-in-2012-report/.

Friday, January 18, 2013

Invisible Query

Time flies! I've neglected this blog for about 6 weeks.

Dennis O'Connor and I are deep into authoring a book on Teaching Information Fluency. Our deadline is the end of April.

Writing a book is a discovery activity for me. Last time I wrote this much was my dissertation and I discovered plenty about flow and mathematics while doing that.

This time, while it would seem I've traversed the topic of information fluency through this blog and the 21st Century Information Fluency Project website, there are still Aha! moments.

As I was thinking about the process of querying, it occurred to me that there's a lot more to it than translating a natural language question into a query. That's just the visible query--the one the search engine responds to. There's also an invisible query, the one you don't enter into the text box: the keywords or concepts that remain in your head.

These help you filter the results of the query. Some results are more relevant than others, not due to their ranking, but because you have some priorities in mind the search engine is unaware of.

It's generally ineffective to enter everything you're looking for into a search box. Doing so constrains the search and produces fewer results--sometimes none. It's better to submit a small number of keywords (two or three) and scan the results with your invisible query in mind.

Using one of our classic examples, "How many buffalo are there in North America today?", a good query is buffalo north america (bison is better than buffalo). Yet that's not really enough information to answer the question, which calls for 1) a number and 2) one as recent as possible. That's the invisible part you have to keep in mind throughout the process. You choose results that satisfy 1 and 2; otherwise, you're probably not answering the question.
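
To make the idea concrete, here is a toy sketch of filtering results against the invisible query. The snippets and the filtering rules are invented for illustration; they don't come from any real search engine.

```python
import re

# Invented result snippets a query like "bison north america" might return.
results = [
    "Bison once roamed North America in vast herds.",
    "Approximately 500,000 bison live in North America as of 2012.",
    "In 1889, an estimated 1,091 bison remained on the continent.",
]

def satisfies_invisible_query(snippet, min_year=2010):
    """Keep a snippet only if it contains 1) a number and 2) a recent year --
    the criteria held in the searcher's head but never typed into the box."""
    has_count = re.search(r"\b\d{1,3}(?:,\d{3})+\b", snippet)  # e.g. 500,000
    years = [int(y) for y in re.findall(r"\b(?:18|19|20)\d{2}\b", snippet)]
    return bool(has_count) and any(year >= min_year for year in years)

for snippet in results:
    if satisfies_invisible_query(snippet):
        print("Keep:", snippet)  # only the recent 2012 figure survives
```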

One premise of the Filter Bubble is that the machine learns from us and hones its output to our preferences. That becomes a harder task when we don't feed the machine everything we have in mind. Withholding the invisible query may be a pretty good way to keep the Filter Bubble from encompassing us.

Next time you search, think about what you're still looking for that you're not putting into the query.