Carnival of Distractions 8

April 21, 2012 Category: American Culture

Escapism Gone Haywire

 

When I was young, I loved Nintendo.  I grew up playing Super Mario Bros 1 – 3 / Super Mario World 1 and 2, Castlevania 1 – 4 / Dracula X, Mega Man 1 – 7 / X 1 – 3, Ninja Gaiden 1 and 2, Blaster Master, Kid Icarus, Duck Tales, Strider, Rygar, Bionic Commando, and Metroid / Super Metroid.  These games all used the side-view, scrolling format (a.k.a. “platform” games).  The point, back then, was not to live vicariously through the character (pretending to be him/her), but rather to control a character that was clearly delineated as not oneself.

Even with that earlier technology, I was able to achieve full immersion.  That is to say, I was able to parlay the (relatively unsophisticated) experience into a full-fledged, escapist excursion—a journey into a magical land for an hour (or six).  With the above titles, I could plunge head-first into a realm of fantasy and fascination.  In each “level”, I could explore worlds that were enchanting and enthralling.  I would simply allow my hyperactive imagination to fill in what the pixels on the screen could not.

That “imagination gap” was the key.  It’s why books are often better (for the mind) than movie adaptations.  Books force our imagination to bridge that gap…while movies tend to do all the imaginative work for us.  Here is the crucial difference between active and reactive imagination: one is creative; the other is passive.  The trick for game designers, then, is to do just enough, but not too much.

I also played The Legend of Zelda 1, 2, and A Link To The Past, as well as the ultra-minimalist Solomon’s Key (single-screen side-view)—games that are unsurpassed in the genius of their simplicity.  With each, I was able to become fully absorbed.  I had no problem “losing myself” in the game…within limits.  None of these games would have been made significantly better by superior graphics.  It didn’t take 21st century technology to create a world full of wonder and excitement.

That was roughly 1987-95—what I would call the Golden Age of video games.  It encompassed the 3rd and 4th Generations of gaming consoles.  This period included the original Nintendo during its heyday, followed by the SNES (and Sega Genesis).  (Pre-87 could be considered the “early era” of domestic video gaming: the first two generations of the technology, which ended upon the release of the NES.)

Times have changed.  We’re now in a new era of video gaming: the 7th Generation (inaugurated in late 2005).  This corresponds to a new era of American culture: an era with different demands and different expectations.  The current video-gaming ecosystem has only added to the Carnival of Distractions in which we find ourselves.  Allow me to explain.

 

THE NEW PARADIGM: A MIXED BLESSING

Technology has enabled gaming systems to create a pseudo-3-D experience (one’s character seems to move INTO the screen as opposed to merely ALONG the screen) using a “first-person p.o.v.” configuration.  (This is generally a matter of wielding a weapon to kill oncoming enemies—each seeming to come at the screen.  Wolfenstein 3D—for PC—was the first major foray into this paradigm, followed by Doom.)  The point was to emulate a proto-virtual reality experience for the game-player, rather than merely offering captivating images on the screen.  Moreover, the aim became character-as-avatar: a vehicle for the user, who pretends he is the protagonist in the world rendered by the game.

Here, I include the quasi-first-person variations (a.k.a. “over-the-shoulder”)—a vantage point that admittedly blends with a third-person perspective.  What I am NOT talking about here are purely third-person formats (e.g. third-person “action” RPGs like Dragon Warrior / Dragon Quest, God of War, Diablo, Star Ocean, Neverwinter Nights, and Dungeon Siege; “open world” or “sandbox” games like Fallout and Prototype; or “fighting” games like Mortal Kombat and Soulcalibur).  Indeed, action-adventure games like the (amazing) “Devil May Cry” series are not part of the present critique, as they are third person.

The amazing renderings afforded by the new technology are captivating for reasons far different from those of 3rd and 4th Generation video game technology.  Now the player is the character, and is thus visually “inside” the virtual world.  Moreover, the virtual world is typically composed of environs made to look as “realistic” as possible.

This (newer) approach takes two general forms—which often overlap:

·      The shoot-em-up variety (the FPS / TPS genre): typically hyper-militaristic, sci-fi-oriented, and set in a menacing dystopian future.

·      The narrative-based variety (the RPG genre): typically character-driven and fantasy-oriented (generally over-the-shoulder vantage point).

Examples of the former include: Doom, Quake, Rage, Halo, Turok, Crysis, Dead Space, Bioshock, Singularity, Tomb Raider, Resident Evil, Killzone, Mass Effect, Gears of War, Half-Life, Time Crisis, Resistance, Red Faction, The Darkness, and Unreal Tournament.  Other militaristic games are based more on “real world” scenarios: GoldenEye, Rainbow Six, Conflict, Ghost Recon, Alpha Protocol, Counter-Strike, Splinter Cell, Sniper, Call of Duty, Bodycount, Far Cry, Medal of Honor, Battlefield, Metal Gear, Max Payne, Homefront, Fear, SOCOM, and—set in the old West—Red Dead Redemption.  Some games (notably, Mass Effect and Metal Gear Solid) are essentially animated movies that offer the viewer isolated opportunities to control the character.  (Here, one has the opportunity to not only live vicariously through the hero in the movie, but to PERFORM IN the movie…so as to feel as though one can take some of the credit for the heroism.)

Examples of the latter include: Fable, Final Fantasy, The Elder Scrolls, The Witcher, Xenoblade Chronicles, as well as several descendants of Legend of Zelda and Prince of Persia.  Games like Assassin’s Creed and Hitman are over-the-shoulder “sneaker” or stealth games, which essentially serve as first-person immersion (the player as protagonist in the adventure).  Splinter Cell and Metal Gear Solid provide a military theme to this format.

These two genres have come to exist on a spectrum.  In other words, many games are hybrids of the two (e.g. Deus Ex, Dark Souls).  Thus, most RPGs have an FPS element, and vice versa.  Note that various MMO (massively multi-player, on-line) versions of some RPGs have also been developed.

Keep in mind that each one of the above titles is a series—usually with at least three iterations, sometimes many more.  There is so much to choose from, it is overwhelming…and very, very time-consuming.  As with previous generations, these video games are like crack cocaine: intoxicating and extremely addictive.  (Supply meets demand, ergo the bonanza of sequels.)

To reiterate, not included in the critique are RTS games (e.g. StarCraft), MMORPGs (e.g. World of Warcraft), and general action-adventure games (e.g. Devil May Cry, Dishonored, Darksiders, Blades of Time, Uncharted, Jak & Daxter, Ratchet & Clank, Sly Cooper, Ico, Sphinx, as well as the descendants of Super Mario Bros., Metroid, Castlevania, and Ninja Gaiden), which are based on a quasi-third-person perspective.  Alas, largely gone is the “platformer” variety, a format the industry has deemed antiquated.  After all, the “real world” is not a 2-D side-view of the world inexorably scrolling by.  Full immersion requires more of an “open world” feel, conferred by so-called “sandbox” videogames like Grand Theft Auto and Just Cause (in which the avatar can go anywhere at any time).  Certain action games–traditionally based on a linear sequence of “levels” and respective “bosses”–have incorporated this “open world” element (e.g. Devil May Cry).

The 21st century version of video games has given gamers a whole new level of video (and audio) rendering to bedazzle the senses.  But there’s a catch: With this format, almost no CREATIVE imagination is required.  Why not?  Most of the (creative) “work” is done FOR the player BY the machine.  The technology lifts the onus that used to be on the player’s imagination—thereby allowing the (creative) imagination to idle—and, the case could be made, to atrophy.  In other words: There is no more “imagination gap” for us to bridge; the technology does almost everything for us.  At first, this would seem to be a good thing.  Yet on further reflection, we should note its drawbacks.

(A similar phenomenon has happened pursuant to the advent of CGI in cinema.  Many productions get carried away with the amazing new technology…and end up being CGI-ed to death.  The result: all special effects, no substance.  Technology should not automatically entail mindlessness.  More limited technology is sometimes a blessing, as it forces everyone to be more creative (both the creators and the audience).  Call this the “Jim Henson” effect.  Compare Star Wars Episodes 4-6 with 1-3 for an illustration of the demise of this effect.)

The new mode of full-immersion often becomes so engrossing as to distract players from the real world…even after game-play (temporarily) ends.  There is a difference, we find, between delimited escapism and an activity that enables (or even encourages) players to become completely disconnected from Reality…even when they’re not actively engaged in the activity.  Once a person becomes acclimated to having all the creative work done FOR him, he may become incapacitated in other areas of life.

Note: That’s one of the problems with religion.

When a person becomes more interested in a virtual world than the real world, dysfunction invariably ensues.  When this happens to large swaths of the population (as is happening with epidemic obsessions with Reality TV, Facebook, and RPGs), it is a recipe for disaster.  (Note that tens of millions of Americans are intimately familiar with the cast of characters in their virtual world of choice…yet don’t know the first thing about major political issues.)

Dedicated gamers often come to obsess over certain games—call it “Hyper-Escapism Syndrome” (HES).  In other words, they come to care only about a particular virtual world—thereby becoming habitually oblivious to things that actually matter in the real world.  Such a proclivity was never a problem with, say, Pac-Man, Pitfall Harry, and Donkey Kong (2nd Generation games).  If as much time, care, and cognitive energy were devoted to the world’s (actual) problems, world poverty could be eradicated within the year.

Of course, HES doesn’t afflict every video game player.  Certainly, there are plenty of dedicated gamers who are well grounded in Reality.  (As with most D&D players, many gamers are perfectly capable of demarcating their fantasy world from the real world.)  My concern pertains to those who succumb to HES.  (Whether that’s the majority or the minority is unclear.  Either way, my contention is that HES is becoming ever more prevalent—in tandem with the change in technology.)

 

THE PARADIGM SHIFT: A BRIEF REVIEW:

The new paradigm was initiated by the release of the Sega Saturn and Sony PlayStation in 1995.  That pivotal development ushered in the 5th Generation of gaming consoles—which also included the Nintendo 64 (late ’96).  It was then that I knew we’d entered a new paradigm of gaming.  Most telling was the transformation of Nintendo’s two flagship franchises: the Super Mario series and the Legend of Zelda series.  Both had transitioned to “over-the-shoulder” 3-D rendering (Super Mario 64 and Ocarina of Time).  The paradigm shift was refreshing for some, jolting for others.  (Ocarina of Time would be followed by Majora’s Mask.)

The 6th Generation was inaugurated in late 1999 by the (ill-fated) Sega Dreamcast.  The PS2 followed the next year (late 2000).  The Nintendo GameCube and Microsoft’s Xbox would be released in late 2001.  With these consoles, the new paradigm was advanced yet again.  (In 2002, Nintendo’s flagship series were taken to their next stage of evolution: Super Mario Sunshine and Wind Waker.)  The key development for this generation was the integration of on-line capabilities, as multi-player, on-line gaming (and even MMO) was becoming increasingly popular.

Beginning at the end of 2005, we came into the (current) 7th Generation of gaming consoles: with the Xbox 360.  This was followed the next year by the PS3 and Nintendo Wii (at the end of 2006).  Nintendo’s flagship series saw further advances in the new paradigm: Super Mario Galaxy 1 and 2; Twilight Princess and Skyward Sword.  MMO capabilities were enhanced yet again, as on-line gaming (now more and more standard) became seamlessly integrated into the consoles.  Meanwhile, motion-sensing was incorporated so that the game would react to the player’s physical movements (e.g. via the Sony “Move” and the MS “Kinect” add-ons)—a feature pioneered by the Wii.

Not coincidentally, the 7th Generation coincides with the social networking craze: the “Facebook World” I discuss in my series, “Welcome To A Facebook World”.  We are now in an era of texting, Twitter, Reality TV, blog-posts, Facebook updates, and smart-phones—an era that is very different from the days of the NES.  It is a world in which myriad social media technology has transformed the way we think, communicate, and behave…not always for the better.

Bottom line: Our attention is being consumed by—instead of freed up by—much of the new technology.  In other words, much of the new technology is keeping us chronically distracted—occupying our minds instead of liberating them.  Consequently, our attention spans have drastically diminished since about 2006, when what I’ve dubbed the “Facebook World” was inaugurated.  As a result of these new cultural norms, the capacity to engage in critical reflection has been severely handicapped.  Meanwhile, people demand to be incessantly hyper-stimulated like never before.

With all of the new gaming technology, something magical has been lost—a magic that can only be realized by an active mind, not by fancy gadgetry.  Put another way, the newest technology has become a cognitive prosthetic—thus stunting users’ capacity to (autonomously) think creatively.  As has happened with other technologies, users’ minds have become lazy, their thinking reactive—dependent on the prosthetics with which they’ve become infatuated. 

Technology, we should remind ourselves, is good when it enhances our capacities, not when it engenders mental lethargy and dependency.  Some of the new technology has, of course, been a good thing.  Yet much of it has only served to render us hyper-stimulated, passive-minded amusement-addicts.

As things now stand, if we don’t have sufficiently bedazzling graphics and relentless hyper-stimulation, we typically lose interest.  Patient reflection / deliberation has become anathema in a country where tens of millions would rather play Grand Theft Auto than read Immanuel Kant.

Let’s be clear: This is not to say that these newer video games are not stupendously impressive.  They are.  Indeed, a tremendous amount of thought goes into the narratives, background stories, character development, and plot-points of many of these highly complex virtual worlds.  The graphics are almost always jaw-dropping.  The point is that the turbo-charged escapism offered by these 5th, 6th, and now 7th Generation games begets dysfunction in a way that Dig Dug, Frogger, and Q*bert did not.  (2nd Generation gaming had not achieved full immersion, so it could not engross us to such significant degrees for long periods of time.)

Atari and the Commodore 64 clearly were not enough.  Yet the PS3 and Xbox 360 seem to be too much.  It appears that the 3rd and 4th Generations had achieved the ideal medium: a marginally sophisticated rendering…while leaving the rest of the “work” for our own minds to do.  Think about it: Nobody ever played Super Mario Bros. 3 and came away disappointed, saying, “If only the graphics had been better.”

I’d take Super Mario Bros. 3 over Halo 3 any day.  But many of those who came of age much after 1995 would look at Super Mario Bros. 3 and scoff.  Can we account for this discrepancy simply by claiming that I’m stuck in some sort of antiquated mindset?  Perhaps I’m afflicted with a pathetic nostalgia.  Or is there something else going on here?

 

HYPER-ESCAPISM SYNDROME:

The Golden Era is far behind us—as is demonstrated by the expectations, tastes, and interests of the new paradigm.  With the original Super Mario Bros. or Mega Man series, we didn’t want the items on the screen to look more “realistic”; that wasn’t the point.  After all, it was the cartoon-ish (often silly) aesthetic that imbued the game with charm.  Now, the entire point is hyper-realism.  But why?

It seems that those afflicted with HES aren’t so much playing the game to (temporarily) escape Reality as playing it to REPLACE Reality.  The first-person p.o.v. is ideal for this because it allows the player to live vicariously through the character.  What the real world can’t give him, the immaculately-rendered fantasy land CAN.  Sick of all the bullshit at your job?  No worries.  Come home and delve into Final Fantasy XIV, where you are a warrior in Eorzea.  (Or, if you really want to vent, you can pop in Doom: Resurrection.)

One never thought of Mario or Mega Man AS ONESELF.  Now, the character is you—and you are the character.  Typically, one’s avatar is not only amazingly strong and agile, he / she is equipped with a phenomenal weapon and/or super-human abilities.  The hyper-realism allows this illusion to work.

When playing Super Mario Bros., one didn’t need every action to look “realistic”.  In fact, being TOO realistic would have ruined the charm of the game.  Now, we yearn to be convinced that what’s happening in the game is really happening to us—lest the game not be up to snuff.  The criteria for what makes a video game great have shifted drastically—in keeping with the overall cultural shift.

There has always been satisfaction derived from “beating” a level (usually in the form of defeating a “boss” of some kind).  That satisfaction has carried over to the new paradigm…yet the achievement now involves an element of narcissism that did not exist before 1995.  These days, games are designed so as to make the experience “all about me”.  (This parallels the mentality involved with much of the recent social media—which is largely narcissism-based.  In this new cultural milieu, everything is seen and done in terms of “me”.)

With the current games, I get to pretend that it is ME who is doing all of these amazing things.  This narcissism-based satisfaction wasn’t a factor when one was playing Duck Tales.  To be blunt: With earlier generations of video game technology, achieving a feat was not primarily narcissistic.  The kind of satisfaction one derived from completing another level in, say, Castlevania was not the same as the kind of satisfaction derived from kicking everyone’s ass in, say, Halo.  This difference is telling, as it reflects different psychological profiles.

In the late ’80s, there was no yearning to live vicariously through Luigi.  Maybe we wanted to be Samus during flights of fancy, but we did not actually see ourselves AS Samus during game-play.  Samus—like Mario or Luigi—was our proxy, not our alias.  Now, much of the point of the gaming experience is to live vicariously through the character.  The protagonist of the excursion is our avatar, not just a character on the screen.

Note the difference in the perception of conquest.  “Conquering” the game used to be about beating the game itself.  Now, the game is simply a means to another end: I conquer what is IN the game.  The point, after all, is to see oneself IN the game.  This is closer to “The Matrix” than to manipulating a proxy on a video screen. 

When one used to say, “I conquered Mega Man 4,” one meant that one conquered THE GAME.  (It was, of course, Mega Man who LITERALLY beat the bosses, because HE was the one actually “in” the fictional world.)  Now, if one says, “I conquered Halo 4”, one means that one PERSONALLY conquered the things in the game…as if the fictional events really happened.

This may seem to be a trivial distinction, but it reflects an alteration in the American psyche—a cultural change that indicates widespread dysfunction.  After all, video game producers are businesses, and ultimately cater to the demands of the marketplace.

In sum: The change in video-gaming seems consonant with certain changes in mass psychology.  In playing a video game, we went from a departure from Reality to an emulation of (a faux) Reality.  There is a reason that there are no hookers to bang in Mega Man, but plenty of hookers to bang in Grand Theft Auto.  Mega Man didn’t want to bang hookers; WE do.  The issue isn’t whether or not banging hookers is a good thing; the issue is that video games are now designed to CATER TO ME…not to tell a story about someone who has absolutely nothing to do with me.

We can put this another way: Nobody ever played Castlevania because they wanted to experience the thrill of killing zombies.  The haunted milieu was just an interesting theme, nothing more.  It said absolutely NOTHING about the player’s psychology…no more than did any of the levels’ nifty themes in Mega Man or Super Mario Bros.

During the Golden Age, there was a reason that there was no fuss over making an “injured” enemy seem “real” (i.e. in graphic detail).  Doing so would have compromised the enchantment we coveted; there was simply no reason for it.  Now, by contrast, the more graphic detail, the better.  HES demands this.  Regular escapism does not.

The shoot-em-up genre offers orgies of violence that provide an outlet for channeling frustration / aggression.  (When it came to venting anger in the ’80s, Q*bert didn’t quite cut it.)  The problem here is not—as some claim—that violent video games lead to violent behavior.  They don’t.  Rather, the problem is that extensive play can engender myriad degenerate mentalities—habituating the mind to reactive thinking, hyper-stimulation, and—with the FPS games—a very short attention span.

There is little nuance in the FPS genre, as it caters to our growing penchant for simple-mindedness.  Indeed, that has always been the case with “action” video games.  The crucial difference is that the hyper-realistic games of today masquerade as Reality.  Two decades ago, we never treated Zelda as a variation on the real world.  It wasn’t.  That was the point.

 

RE-ASSESSING OUR MODE OF ESCAPISM:

Remember when we only needed an up/down, left/right pad with A and B buttons?  There was an elegance to the simplicity of the interface during the Golden Era of video gaming.  It was amazing how much one could do with just forward/backward, jump, duck, and shoot.  Meanwhile, in those halcyon days, the NON-realistic-ness of the rendering was part of the charm.

Minimalist games are timeless.  From Space Invaders and Asteroids to Pong and Tetris (early-era games), we can have as much fun engaging in game-play as with the most in-depth, 7th Generation blockbuster.  It is important to remind ourselves that astounding graphics and a highly-sophisticated interface are a double-edged sword; such perks aren’t the end-all-be-all of a good game.  Blaster Master did not require copious amounts of processing power to be amazing.

So am I saying that the world would be a slightly better place if everyone who has spent untold hours playing Grand Theft Auto had instead been playing Solomon’s Key?  Yes, that is what I’m saying.  After all, Solomon’s Key required something called “analytical thinking”—a skill that is steadily becoming extinct.

Since about 2006, American culture has entered a new zeitgeist—a zeitgeist symbolized, in part, by the “evolution” of video gaming.  The current (7th) Generation of video games will most likely come to an end later this year, with Nintendo’s release of the Wii U.  One can imagine what technology awaits us in the near-future.  It’s safe to say that completely interactive Virtual Reality is around the corner.  (But why would we need VR?  Isn’t Reality good enough?) 

For fulfilling jaunts of escapism, weren’t the 3rd and 4th Generation video-gaming technologies adequate?  If not, then: What is it, exactly, that we’re looking for?  And why do we “need” it?  We can only wonder what the 8th Generation of video-gaming technology might hold in store for us.

The question we should ask: What will its impact be on our minds…and on our culture?

EPILOGUE:

At the end of 2012, Nintendo released its sixth console, the “Wii U”, thereby inaugurating the EIGHTH generation of video gaming.

The original NES was technically released in 1985 (though it didn’t really catch on in America until 1987).  The second console, the “Super NES”, came six years later, in 1991.  Five years after that came the “Nintendo 64” (in 1996).  Five years after that came the “Game Cube” (in 2001).  Five years after that came the “Wii” (in 2006).  It has now been six years since the Wii was released.

As timing would have it, the new paradigm in social networking / communication technology exploded during the year AFTER the original Wii was released.  (As I’ve discussed elsewhere, the key transition year was 2007.)  American (indeed, global) culture has changed drastically pursuant to that development.  The Wii U is Nintendo’s response to those seismic changes.  With this latest generation, designers aim to further integrate the gaming console with internet-facilitated media (streaming of HD movies, TV, etc.) and ESPECIALLY with social networking media.

Five years after the 2007 social media revolution, we live in what I’ve dubbed a Facebook World–a world that did not exist when Nintendo last released a console.  Nintendo is forced to keep up with the changing times.  Now, almost all formerly disparate functions are integrated: phone / Skype conversations, listening to music, gaming, social networking, watching movies and television programs, etc.  The division between “console” gaming and “on-line” gaming is dissolving.  In the eighth generation, the distinction is almost moot.

Gone are the days when the aforesaid functions were separate activities–attended to by independent tools.  Since 2007, people have become accustomed to smart-phones–and all the hand-held power attendant thereto.  And so the minimalist controller of the older gaming generations is no more.  The new “U GamePad” consolidates the management of all media functions into a single, hand-held device.  It’s essentially a modified tablet computer–replete with an operating system.

Nintendo has essentially been forced to become a computer-maker.  Now, everything happens within the “Mii-verse”.  Yes, indeed: We know for certain that narcissism has been turbo-charged and fully automated when the prime function of a video gaming system is to immerse someone in something called the ME-verse: a universe that’s all about me.

Genuine human-to-human interaction is being steadily-but-surely phased out with modern media technology.  Exhibit A: the division between social networking and playing a video game has been all-but-eliminated.  Reality and fantasy (i.e. fabricated reality) may now be melded.

All we have left to do is get fully jacked into the matrix.

What the Wii U will NOT do is encourage people to read good books or to engage in long, thoughtful conversations.  It will not foster careful deliberation or substantive discourse.  In fact, just as with blogs and Facebook, it will cause people to further ensconce themselves in a memetic cocoon that they’ve custom-made for themselves.  Like most web-surfing today, their personalized “universe” is tailored to suit their own dispositions.  (Other than the fleeting benefit conferred via in-the-moment gratification, this is not a good thing.)

As with IM-ing, texting, and Twitter, the new gaming paradigm will engender ever-shorter attention spans, hunger for augmented sensory stimulation, narcissism, and a penchant for so-called “multi-tasking”.  But no matter: all that is perfectly fine for most consumers.  Welcome to the new zeitgeist.

Until the seventh generation of video gaming consoles, whenever one was playing a video game, the moment one “paused” the game (in order to, say, go to the bathroom or to answer the telephone), one disconnected oneself from the virtual world.  One could stop the gaming experience and walk away from it… until one opted to resume playing (be it minutes later, hours later, or days later).  In the “old days”, in order to resume, one had to (more or less) disconnect from other activities and focus on the game-play.  Bottom line: All the console’s activity started and stopped at the touch of a button.

Times have changed.  Now the connection to the virtual world never really has to end.  The activities of day-to-day life have all been incorporated into the Mii-verse (i.e. Nintendo’s version of “the cloud”).  All the tools of our everyday lives have been adapted to render virtual encounters…in ALL contexts.  Consequently, game-playing can now be seamlessly integrated into a continuous, all-encompassing, virtual experience–an experience that involves most of one’s OTHER daily activities: chit-chatting with one’s friends, making on-line purchases, posting comments, putting an event into one’s schedule, updating one’s profile, etc.

With the Wii U, you can still “disconnect” when you, say, go to the bathroom.  But, then again, your virtual self is never completely disconnected.  Now your friend overseas is wondering why your avatar in Halo 4 isn’t going down the next corridor yet.  You’re peeing, but your Wii U will be in full operation all along.  Other members of your Mii-verse are there, waiting, elsewhere in cyberspace.  If you turn the TV off, the virtual world will persist.  Your gaming experience is part of cyberspace now, so it never has to end.

Having a conversation with someone else used to be a separate activity from playing a video game.  Now, the two activities are different facets of a unified experience…all facilitated by one device.  This is both incredibly cool and incredibly creepy–depending on how we look at it.  If we extrapolate into the future, Nozick’s “experience machine” can’t be too far away.  (Bear in mind, our “experience machines” are all interconnected now.  So, ultimately, we’re all “jacked” into the same “net”.)

The remarkable perk of this eighth-generation gaming console is that it has the capacity to consume all our attention.  The imminent danger of the eighth-generation technology is that the gadgetry will tend to consume all our attention.  When the same device can take care of social networking updates and one’s avatar’s activity, perhaps we are getting a little carried away.  But, hey.  Isn’t the point to make it seem like reality?

Or is our goal to force our reality to fit the virtual world?  Alas.  We will come closer to erasing that fundamental distinction in, perhaps, another five years.
