Tuesday, November 23, 2010

The Unevenly-Divided Future, or, Leveling the Digital Divide

This course has provided me with many opportunities to revisit some of my favorite pre-doctoral literary works, including those of William Gibson. I finished his novel "Idoru" several years ago while riding the train home from evenings on the town during a business trip to Japan. The term "prescient" applies here, as I would look up from the book and still feel immersed in the world that Gibson had created in a slightly more modern Tokyo. Gibson has an interesting way of looking at the world and a unique turn of phrase in his writing, and I've enjoyed all of his books. The statement "The future has already arrived--it's just not evenly distributed" is excerpted from a National Public Radio interview with Terry Gross on the "Fresh Air" program, November 30, 1999.

This quote speaks volumes, succinctly stating the nature of what we refer to as the "Digital Divide": a separation of haves from have-nots with regard to technology and information. It is my own view, however, that this divide is closing rather quickly. Information access used to be the exclusive province of those with assets and connectivity, all of which required funds, but in today's world the technology needed to access information has become cheaper and more readily available. The advent of wide-area Wi-Fi networks and the affordability of smartphones that can use them have made access more of an on-ramp than a barrier.

An additional factor to consider is that with large-scale manufacturing of these hand-held devices, the concept of "economy of scale" comes into play: the more that are made, the less they cost per unit, as companies recoup their R&D and assembly-line setup costs. The end result is that the hardware is almost free, and most of the cost lies in the services supplied by wireless carriers. Even those fees have gone down significantly in recent years. I recently switched carriers and bought a new iPhone, and managed to save significantly on our monthly household wireless bills: the new family plan gives me a full data package plus calling and text messaging, and a separate calling-only line for my husband's phone (he is a phone Luddite), for less than I paid on the old carrier for data service alone. Once again, economy of scale.
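The economy-of-scale arithmetic above can be sketched in a few lines of Python. All of the dollar figures here are hypothetical, chosen only to show the shape of the curve: fixed costs spread over more units, so per-unit cost falls toward the marginal cost of parts and labor.

```python
# Hypothetical illustration of economy of scale: one-time fixed costs
# (R&D, assembly-line setup) are spread over the production volume,
# so average cost per device falls as volume grows.

def unit_cost(fixed_costs, marginal_cost, units):
    """Average cost per device at a given production volume."""
    return fixed_costs / units + marginal_cost

FIXED = 50_000_000   # one-time R&D + assembly-line setup (made-up number)
MARGINAL = 120       # parts and labor per handset (made-up number)

for volume in (100_000, 1_000_000, 10_000_000):
    print(f"{volume:>10,} units -> ${unit_cost(FIXED, MARGINAL, volume):,.2f} each")
```

At 100,000 units the fixed costs dominate; at 10,000,000 units the price approaches the marginal cost, which is why mass-market handsets end up "almost free."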

The other example of economy of scale that came to mind as I listened to the podcast and vodcast was the "One Laptop Per Child" initiative spearheaded by Dr. Nicholas Negroponte of MIT. This foundation's goal is to provide every child in the world with a basic, low-cost, connected laptop, loaded with software to support and encourage them to learn, connect, and change the world as they do so. The success stories from this program are compelling, and the push for computer literacy as well as basic literacy will help nations that have traditionally been on the wrong side of the divide to bridge it.

Another initiative that is not so much technical, but one which promises to institute change for the better in the world's poorest nations, is the work of Dr. Greg Mortenson, the author of "Three Cups of Tea" and "Stones into Schools". Mortenson believes that education, particularly the education of the girls who will raise the next generation of children, is the most powerful lever for change. Not only does literacy have a direct effect on infant mortality, but it also exposes children to new ideas and concepts that they would not otherwise have encountered. His efforts have been concentrated in Afghanistan and Pakistan, and he has succeeded more often than not, even though he is changing cultural norms and beliefs.

For me, the path to closing the digital divide is going to require both technology as well as basic literacy. Together, those two factors create a powerful set of tools for their users.

References

Central Asia Institute. (2010). Accessed at http://www.ikat.org/

One Laptop per Child. Accessed at http://laptop.org/en/vision/

Mortenson, G. & Relin, D. O. (2006). Three cups of tea: One man's mission to promote peace . . . one school at a time. New York: Penguin Books.

Soloway, E. (2009). Podcast.

Wednesday, November 10, 2010

DVD vs. Video On Demand...and the winner is...

When our class was notified that a future assignment would involve viewing a movie based on a Philip K. Dick novel, and I looked at the list, I knew exactly which film I wanted to see: Blade Runner (based on Dick's novel "Do Androids Dream of Electric Sheep?"). Furthermore, rather than borrowing a copy from a friend who owns multiple versions and formats, or renting it from somewhere, I opted to treat myself and buy the Director's Final Cut package on Blu-ray (the 5-disc set with all the special-feature material).

Like most technology decisions in my house, there were consequences. This decision required several additional steps, the first of which was to unpack the new Blu-ray player that had come with my new camera and was still in its original shipping box ten months after arriving at our house. Once unpacked, it had to be connected to our system, which is an engineering challenge even on a good day. This required finding several additional cables, as well as pulling the system switcher/amp out of the cabinet and patching the audio and video cables in appropriately without displacing any of the other components.

Once connected, I turned on the projector and crossed my fingers--it WORKED!!! I was then able to sit down and view the film in its full glory, in the form director Ridley Scott imagined it, while taking notes on the technologies that appeared on the screen.

Yes, there were probably easier ways to accomplish all of this, but without a compelling reason, nothing changes. I am gradually building my DVD collection of films that I have enjoyed or really want to see, in hopes that someday I will have time to view them. My husband, on the other hand, cannot be bothered with DVDs. He has all the shows he has collected off the various cable channels (which he watches incessantly), recorded by the DVR whenever they aired while he could not watch in real time. On any given day, he has two days' worth of viewing waiting for him. In addition, he has us signed up for several movie pay channels (I honestly don't know what we have, as my TV viewing usually consists of the Weather Channel while eating breakfast before heading out to work).

We have never actually used the On Demand function on our cable system. We do not subscribe to Netflix or Blockbuster because no one has time to watch additional movies. On the other hand, no one has ever called us normal with a straight face. It's OK--we manage.

In the real world, people do seem to care about these matters, so for the purposes of meeting the assignment requirements, I would suggest that the competition between DVDs and video on demand is an example of increasing returns. The point made by Dr. Thornburg in the vodcast (Laureate, 2008) is that the technology that wins an increasing-returns contest is often sub-optimal, and that is in fact the case here: if one compares a film viewed from a Blu-ray disc with one viewed from the cable on-demand system, the image produced by the Blu-ray is significantly better.

The reason lies not in the source image, but in what happens to that image between the source and the display. All high-definition images begin at 1920 x 1080 pixels (1,080 lines of vertical resolution). A Blu-ray player delivers this quality consistently, because it is a dedicated source with a dedicated "pipeline" connecting it to the display. The cable system begins with the same image quality, but then compresses the image in order to send it through the cable system (along with the other 250+ channels of infomercials, car racing, wrestling, sports you have never heard of, and home-improvement shows).
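Some back-of-the-envelope arithmetic shows why the cable system has no choice but to compress. Assuming 24-bit color and 30 frames per second (reasonable but simplified figures), a completely uncompressed 1080p stream is enormous:

```python
# Rough arithmetic: bandwidth of a raw, uncompressed 1920x1080 video stream.

width, height = 1920, 1080
bits_per_pixel = 24          # 8 bits each for red, green, and blue
frames_per_second = 30

raw_bps = width * height * bits_per_pixel * frames_per_second
print(f"Uncompressed 1080p: {raw_bps / 1e9:.2f} Gbit/s")
```

That works out to roughly 1.5 Gbit/s. Even a Blu-ray disc is compressed (its video stream tops out around 40 Mbit/s), but it is compressed far less aggressively than a stream that must share a cable plant with hundreds of other channels.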

Video compression involves finding redundant information within the signal and throwing it out in order to make better use of the available bandwidth. Other compression methods take the parts of the picture that will be missed least (the outer edges, for instance) and remove them. Some cable systems routinely compress more than others, and the more compression that is applied, the worse the picture appears.
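A toy example makes the "throw out redundant information" idea concrete. Run-length encoding, sketched below, collapses repeated values (think of a row of identical blue-sky pixels) into (value, count) pairs. Real video codecs are vastly more sophisticated, but the underlying principle of spending bits only on what changes is the same.

```python
# Minimal run-length encoding sketch: repeated pixel values are stored
# once, together with how many times they repeat.

def rle_encode(pixels):
    """Compress a sequence of pixel values into (value, run_length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1       # extend the current run
        else:
            runs.append([p, 1])    # start a new run
    return [(value, count) for value, count in runs]

row = [200] * 50 + [180] * 30 + [90] * 20   # a mostly uniform scan line
encoded = rle_encode(row)
print(encoded)                    # [(200, 50), (180, 30), (90, 20)]
print(len(row), "values ->", len(encoded), "pairs")
```

One hundred pixel values shrink to three pairs, and nothing is lost; lossy methods go further by also discarding detail the viewer is least likely to notice.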

However, convenience often trumps quality: on-demand technology lets viewers watch movies without waiting, without going out in the rain/sleet/snow/darkness/sunlight, and without worrying about losing DVDs, late fees, or other accountability issues. It becomes a trade-off between quality and convenience, and many opt for the convenience. Several media sites have noted that sales of both DVDs and DVD players are declining. Ironically, cable subscriptions are also declining, with cable providers having to compete for customers' loyalty. The only area that is stable and growing is Internet service.

More Americans are watching television on their computers or portable devices while cutting back on their cable TV service, dropping pay channels for basic service or passing on cable altogether in favor of Netflix and website TV. This is hurting the cable companies' bottom line, and they are taking notice. Because they are being undercut by competitors who use their bandwidth to deliver competing products, the cable companies are starting to impose surcharges on high-data-rate users (many of whom are downloading movies). Technology keeps adding new options to the traditional cable package, and so progress marches on. Television as we knew it is gone, and the future looks great (although there is still nothing on when I sit down to watch).

References

Laureate Education, Inc. (Producer). (2008). Increasing returns, featuring Dr. David Thornburg.

Netflix vs. Cable (2010). Accessed at http://suite1102.com/?tag=netflix-vs-cable

Wikipedia (2010). Display Resolution. Accessed at http://en.wikipedia.org/wiki/Display_resolution

Tuesday, October 26, 2010

Module 4: Second Life as a Disruptive Technology

Second Life as a Disruptive Technology in Education and Training

I would like to introduce you to my avatar, Snowball Svoboda.

She represents me and my involvement in Second Life, where I have been a resident for three years. I see the virtual world through her eyes, and together we explore educational opportunities and new learning situations. We are members of the Second Life Educators and Second Life Researchers communities, as well as Real Life Educators in Second Life. Those groups alone can keep an avatar and her person busy most days. There are lectures, discussions, courses, symposia, concerts, and exhibits going on all the time in-world, so I have to pick and choose the events that are most useful for me, because I could easily become so involved that I would neglect my First Life obligations. Unfortunately, Second Life has not yet evolved to the point that I can send my avatar to an event and have her take notes and report back to me. That could potentially be the beginning of “virtual cloning”!

In the workplace, Second Life is becoming popular as a training and collaboration space, especially in companies that are geographically distributed and where meetings are conducted via teleconference systems. People attend these meetings, but their value is limited because there is no real engagement; many attendees start checking their email, planning grocery lists, and doing other unrelated things. However, as noted by Chuck Hamilton, IBM's new-media leader for the 3-D internet team, "You can come to the meeting as a fish" (Gronstedt, 2008)--you can be creative, and as you express your creativity, you become more engaged in what you are doing. Additionally, you now have a (virtual) physical presence in the room, which also leads to more participation and direct involvement, all without the need to leave one's office or home, fight traffic, or incur travel expenses.

Second Life, for those companies and organizations that have made the leap, has become a disruptive technology in the most constructive sense of the word: it gives everyone in a large, geographically diverse organization equal access to events sponsored by the parent company. The Chief Learning Officer at Computer Sciences Corporation (CSC) championed the use of Second Life in conjunction with CSC's 50th-anniversary celebration. The training organization sponsored a research grant that funded the development of an island (a discrete piece of property in Second Life) to be used initially for sales training; it was later co-opted for a virtual tour of CSC's 50 years of history. These applications were so successful that the learning team has reclaimed the island and is now using it for new-hire orientation training. Because CSC has numerous branches all over the world, this arrangement guarantees that all new hires receive the same information without the company incurring travel expenses (Siekierka, Huntley, & Johnson, 2009).

As a technology, Second Life displaced stand-alone simulation programs, as well as some of the computer games using avatars. What makes Second Life truly unique, however, is that everything in it is created by users. Linden Lab provides the raw materials (called "prims," for "primitives") that can be morphed into almost anything the builder can imagine. Avatars begin as generic characters, and each user customizes their avatar's appearance, clothing, and accessories (wings are always an interesting concept, although they are not necessary for flight in Second Life). Another technology at least partly displaced by Second Life in organizations that have embraced virtual worlds is the teleconference: why sit in a room staring at a speaker (the electronic kind) when you can gather virtually?

Before I get too far down the road cheerleading for Second Life, I would be professionally remiss if I did not also address the concerns about Second Life: namely the fact that it is used by many international criminal and terrorist organizations for communications between their personnel. What is an advantage for business and education is also an advantage for the bad guys, but for different reasons: this way they can meet without being in the same location (and creating what my uniformed colleagues refer to as a "target-rich environment"). They do not have to leave their safe havens and travel, all of which potentially exposes them to the folks who are looking for them. Like most anything else, Second Life is a double-edged sword that can be used for good or for evil.

Because Second Life is limited only by the number of servers that Linden Lab can support, it is never likely to become overcrowded or overused. As long as users keep coming up with new applications, events, and purposes, there will always be a demand for Second Life. I would predict that it has at least 5-10 more years before it runs out of ideas and inspiration.

As an educational tool, Second Life has acquired a large following in some very interesting places: a government agency needed to train personnel to become fluent speakers of Korean. Traditionally, this was done in a classroom, taught by a native speaker, with the standard drill and practice and rote memorization of vocabulary that many of us have come to expect. Someone at this agency got very creative, and got funding to buy an island, where they developed and built a traditional Korean village. Instructors developed avatars in traditional Korean garb, and student avatars were dressed in school uniforms and were expected to follow the protocols of school in Korea, including standing in respect when the instructor enters the room.

Here's where it gets interesting, though: because this became an exercise in total immersion in both language and culture, students were becoming proficient in spoken and written Korean much more quickly than their traditionally trained peers. They performed better on standardized tests and impressed the instructors, all of whom had previously taught the traditional classes. How did this happen, you ask? The actual research has not been published, mostly because this agency does not publish for the benefit of the rest of the world. However, those of us who were in the room when this information was briefed developed a theory that could be worth pursuing: adult learners tend to be risk-averse in a classroom. If they are not absolutely certain of the right answer, they will sit quietly and try to become invisible. Yet we also know from prior research that adults learn best by making mistakes and examining the mistakes of others, so the instructor needs to get someone to give an answer in the first place. The Korean instructors noticed that in the virtual classroom, students/avatars were much more willing to respond to questions and provide answers and input. This gets into some deep psychology (see Dr. Nick Yee's dissertation for much more detail on this concept, called the Proteus Effect): because students feel that the mistakes are being made by their avatars, and not by them, it is safe to take the risks. Students learn from the avatars' mistakes and move on.

Other social benefits of Second Life echo those of the internet in general: the ability to connect people across a wide geographic span to collaborate on a variety of projects and activities. Several years ago, a non-governmental organization (NGO) doing relief work in Nepal needed to build a new clinic but could not afford a traditional architect's fees, so someone floated the idea of a design contest in Second Life. The idea took on a life of its own: a self-selected group of volunteers, real-life architects, students, wanna-bes, and just plain creative people pooled their expertise and ideas and, using a wiki model of collaboration, developed a design for the new clinic, which ended up winning an international design prize (Wikitechture, 2008).

For those who are physically challenged, Second Life offers a taste of what normal could be: avatars can walk, run, dance, and even fly, even though the people behind them may be confined to wheelchairs. Hearing-impaired people are right at home in Second Life because text is used quite extensively, even though a voice-chat capability was recently implemented. Most of the events that Snowball and I attend are conducted in both modes, as computer speeds in some regions are too slow to support real-time voice. Several programs have used Second Life to teach life skills and communication manners to autistic/Asperger's patients, and there is an active group of autistic individuals who have formed a support group in Second Life. Children who have suffered traumatic circumstances find it easier to talk to a friendly avatar on the screen ("just like television, only it talks back to me!"), so the virtual world becomes a tool for therapy as well.

I could go on and on, because in three years I've barely begun to see all that there is to see and do in Second Life. However, I'll wind it up here and just suggest that as Ed Tech students, we should all be at least conversant in functioning in a virtual world. It costs nothing to sign up for Second Life, and for those of us whose First Lives are sort of limited (that would be me: grad school, full-time job, and a business owner), it offers a wealth of experience that we don't even have to leave our computers to participate in. See you in-world!

References

Gronstedt, A. (2008). Be First in Second Life. Training, 29-30. Retrieved from Education Research Complete database.

Siekierka, G., Huntley, H., & Johnson, M. (2009). Learning is center stage at CSC. T+D, 63(10), 48-50. Retrieved from Academic Search Complete database.


Wikitechture on YouTube (2008). Accessed at http://www.youtube.com/watch?v=amCi90zH3VI&NR=1

Yee, N. (2007). The Proteus Effect: Modification of social behaviors via transformations of digital self-representation (Doctoral dissertation, Stanford University). Retrieved October 28, 2010, from Dissertations & Theses: Full Text. (Publication No. AAT 3267669)



Thursday, October 14, 2010

Module 3 Posting: Surveying and Cartography Revived by Google Earth and GPS

One of the traditional technologies recalled and revived by current technological breakthroughs is cartography, the study and science of making maps (Wikipedia, 2010). Cartography documents the results of surveying, which establishes three-dimensional points on the Earth's surface and the angles and distances between them, and then uses those measurements to develop maps and boundaries for ownership or governmental purposes (Wikipedia, 2010).

Surveying has a long and storied history with its roots in ancient Egypt and ancient Europe (the placement of the stones of Stonehenge is the result of surveying techniques). George Washington was a practicing surveyor early in his career before he assumed the role of military commander and first president. As the explorers went west, they surveyed as they went, so as to be sure to claim territory for the United States. In Europe, England and France deployed surveyors as they expanded their respective empires through colonization. The British, in particular, surveyed meticulously, and their original surveys of the Himalayas had been the only surveys on record prior to this wave of new technologies.

The advent and open availability of both satellite imagery and Global Positioning System (GPS) technology have made it possible to re-map basically everywhere on the planet, in some cases just because we can, and in others because there have been questions and disputes regarding the accuracy of the surveys on record.


I cite the availability, rather than the development, of these technologies, as both have been in extensive use by the U.S. Department of Defense (and the defense organizations of other nations) for a number of years. We used to use the term "national technical means" to refer to satellite imagery, as only governments had the resources to field orbiting photo-collection satellites. Once commercially available imagery was introduced, more and better-quality imagery became available to anyone with an internet connection and a credit card.

When GPS was introduced as part of a technology-transfer initiative, it was seen as an amusing curiosity (Wikipedia, 2010). However, it was quickly noted that this capability could invalidate many traditional land surveys, some of which had been done more than 200 years ago. A new generation of surveyors is working at computer terminals to correct these legacy surveys.

Fun Fact Story: When my husband and I purchased the land we now live on in 1984, a title search yielded several centuries of survey records. These included the original survey done in 1632, when the property was part of the original land grant from King Charles I of England to George Calvert, Lord Baltimore, as well as subsequent surveys done when the parcel that includes our land was part of a gift from George Calvert, Lord Baltimore, to Charles Carroll, the Maryland signer of the Declaration of Independence, on the occasion of his marriage to Molly Darnall in 1768.

Our real estate attorney, who did the title search, was duly impressed. We have checked the original survey marker points with a GPS, and the coordinates listed are fairly accurate: most are within a tolerance of 12 inches.
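For the curious, checking an old marker against a GPS reading is a small exercise in spherical geometry: the haversine formula gives the great-circle distance between two latitude/longitude points. The sketch below uses entirely hypothetical coordinates (not our actual property!) to show how a roughly 12-inch discrepancy would be computed.

```python
# Haversine distance between two lat/lon points on a spherical Earth model.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points (degrees)."""
    r = 6_371_000  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical: a surveyed marker's recorded position vs. a modern GPS fix
# about 0.3 m (roughly a foot) to the north.
recorded = (39.4000000, -76.6000000)
measured = (39.4000027, -76.6000000)
error_m = haversine_m(*recorded, *measured)
print(f"Discrepancy: {error_m:.2f} m ({error_m / 0.0254:.1f} inches)")
```

Note that the spherical model is an approximation; professional surveyors use the WGS-84 ellipsoid, but at foot-scale distances the difference is negligible.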

References

Google (Google Earth). (2010). http://www.google.com/earth/index.html

Wikipedia (Cartography). (2010). http://en.wikipedia.org/wiki/Cartography

Wikipedia (GPS). (2010). http://en.wikipedia.org/wiki/Global_Positioning_System

Wikipedia (Satellite Imagery). (2010). http://en.wikipedia.org/wiki/Satellite_imagery

Wikipedia (Surveying). (2010). http://en.wikipedia.org/wiki/Surveying

Thursday, October 7, 2010

Tetrad for the Interactive/Electronic White Board

Enhances

  • Classroom tidiness (less equipment on carts and tables)
  • Student immersion in the topic at hand

Reverses/Recalls

  • Marking on diagrams (you cannot do that on a projection screen…more than once)
  • Makes a classroom pointer cool again (especially the kind with the pointing finger on the end)

Obsoletes

  • Chalk boards
  • Many flip charts
  • Projector screens
  • Classroom televisions
  • Trays and cans full of dead whiteboard markers
  • The need to keep a stock of erasable markers, as well as water-soluble ones for paper that don’t stink!

Sets the stage for:

  • Thin touch screen displays that are built into the walls
  • Reactive paint/wall covering

OK, I'm a little behind the power curve (pun intended) due to some unexpected flooding that took out my power and cable and left us trapped on our property last week (10 inches of rain in 2 days is a VERY big deal!). Happily, most of the area is drying out, the utility companies have restored service, and I have a renewed respect for infrastructure, which is essential for technology to work but usually goes unnoticed until it goes away!

My Learning Community has selected the electronic white board as our emerging technology of choice, and since I use them at work in our classrooms, I found this to be a perfectly interesting option to examine in this exercise.

As you can see in the tetrad above, I feel that the electronic boards (are they all considered SmartBoards, or is that a registered trademark that is becoming mainstream like Kleenex and Xerox?) add value and enhance instruction and learning in several ways:

They improve the clutter situation that existed in many of our training classrooms. What with TVs on carts, cables strewn about the floor, projectors sitting on tables in the middle of the aisle, and an overhead projector that was ALWAYS in the way no matter where you moved it, rooms with technology got to be very full of Stuff, all of which was considered important and necessary.

They provide better tools for learning and teaching, with the capability to project, write, type, display, and otherwise integrate a variety of media onto a single screen at the same time. No switching between sources needed!

The interactive white boards “obsolete” a host of things, many of which will not be missed:

  • Chalkboards and their accompanying mess and the possibility of fingernails across them

  • Many of our classes train with flip charts—you can cut down the use of paper resources and the need for those nasty flip chart easels that fall on you when you turn your back (been there and had it happen)—good riddance!

  • The need for a separate projection screen, which either blocked the whiteboard or was up when you needed it down, but was always wrong no matter what. The demise of the projection screen makes the facilitator's job easier: you no longer need to keep an eye on the presenter who is trying to make a point on a projected PowerPoint slide with a marker, ready to snatch it from their hand before they mark on the projection screen! (been there…)

  • No conventional whiteboards means no vast collection of whiteboard markers, most of which have expired and dried beyond any hope of use.

  • The interactive whiteboards also eliminate the need to keep two sets of markers in the classroom, only one of which may be used to write on the whiteboard. We can now maintain a supply of the water-soluble ones that smell like fruit for use on flip charts—erasable markers are OUT!

Practices that interactive white boards recall:

  • The ability to mark on the media being shown (takes us back to overhead projector days!), while not having to worry about a guest speaker marking on the projector screen by mistake.

  • Interactive whiteboards have made pointers useful and cool again (especially the ones that SmartBoard gives away with the pointy finger on the end).

I believe that interactive white boards set the stage for several future concepts:

  • Wall-size screen areas that are touch-screen and incorporate all the input and connectivity capabilities.

  • The generation beyond this is reactive paint or wall covering that, when a signal is connected to it, serves as an interactive display—this would be a practical use for electronic paper!

Friday, September 17, 2010

Obsolete Technology--Super-VHS Tape

Super VHS (S-VHS) tape was introduced in 1987 by the JVC Corporation of Japan. It was marketed as an improvement on the existing VHS standard, capable of displaying recorded material at a vertical resolution of 420 lines. This was the result of an improved magnetic-tape oxide coating that recorded a higher-quality luminance signal to a tape the same size as a regular VHS cassette. S-VHS recordings were viewable on S-VHS equipment, as well as on VCRs with a special added feature enabling S-VHS playback. The same tape could also be recorded in VHS mode and played back in any VCR, so the medium itself was backwards-compatible. VHS tapes could be played on S-VHS equipment; however, the image quality still looked like VHS.

This technology innovation yielded a 75% improvement in image quality over VHS, which could reproduce images at only 240 lines under optimum recording/playback conditions. It competed favorably with the analog laser video disc in terms of picture quality, although it still lacked such capabilities as freeze-framing (without damaging the tape) and searchable chapters.

As NTSC television displays of the time were capable of 525 lines of vertical resolution (and most broadcast signals delivered about 330 lines), this improvement was initially well received. VHS had created the ability to "time-shift" broadcasts to fit the consumer's schedule, and the ability to record and play back one's favorite shows had made the VHS video-cassette recorder a standard part of many households.

However, analog laser-disc technology was beginning to emerge (and quickly morphed into DVD technology), VHS had already won the "Format War" with Sony's Betamax, and consumers were ambivalent about upgrading their VHS technology for an improvement that was better, but not compellingly so. Sony took sufficient notice of this technology to introduce a potential competitor, ED-Beta, which delivered slightly improved video quality and actually competed with Sony's own professional Betacam format. When consumer acceptance didn't happen, Sony quickly discontinued the product, and a format war 2.0 was avoided.

S-VHS recording gained only limited acceptance in the consumer market. In the professional market it had a slightly longer run: the tape was not excessively expensive, and capturing and editing at S-VHS quality meant that the final VHS copies delivered the best image VHS was capable of displaying. The tape did not lend itself to reuse, as its condition deteriorated quickly after being recorded over several times. Nevertheless, TV and news organizations used S-VHS as a medium for fast, cheap acquisition and editing until digital tape became affordable and available.

S-VHS recording was replaced by digital video standards such as D-VHS, DV, Digital S (D-9), Digi-Beta, Beta SX, DVCAM, DVCPRO, and DVCPRO-50. All of these recording media used digital encoding and could reproduce a standard-definition image of between 450 and 850 lines of resolution. Beyond that point lies high definition, where the modes and technologies change substantially.

Reference

Wikipedia (S-VHS). (2010). Accessed at http://en.wikipedia.org/wiki/S-VHS




Sunday, September 12, 2010

Currently Under Construction!

This blog, like the topics posted upon it, is an emerging technology and a developing work of art. Stay tuned for future posts and more content as the academic quarter continues!