Saturday, December 10, 2011

Carrier IQ - Wake-up call or simply an evolutionary milepost in mobile tech?

Recent media noise around Carrier IQ software once again highlights the tension between personal data tracking/analysis and its potential benefits to consumers and/or service providers.

What reinforces the sense of alarm, though, is not necessarily the technology or the software itself but the semi-stealth nature of its deployment and the implications of its abuse in the wrong hands. It is the after-the-fact discovery of such data collection that feeds our imagery of Big Brother.

For average consumers, privacy often seems less sacrosanct and more a commodity with a price. Don't we often end up signing away our privacy rights every time we launch a phone or tablet app to find a favorite restaurant, check in to our favorite social network, or navigate a weekend drive to a different city? In fact, I was surprised, if not shocked, to get a glimpse of the extent to which US phone companies already track and keep our everyday call logs, text messages and IP addresses surfed - all tagged with time and location stamps.

While the bigger question of how an Orwellian government or other rogue authority might abuse such easy and detailed exposure of its citizens' private data is an open issue that society will eventually have to deal with, what is probably more relevant today is how business can honestly neutralize the nefarious side of personal data collection through transparent opt-ins and a clear demonstration of value. In the end, in a truly open and civic society, it may simply be an issue of rightly framing the Value Proposition - what data is collected and when, how it is used, and what is offered to the consumer in return, i.e., what perks or rewards am I trading in for my privacy - a mature and honest quid pro quo.

The truth is that with continuous growth in cloud computing, back-office analytics, GPS-enabled location awareness and 24/7 online connectivity, the envelope of the technology and its seeming intrusion into our location, communication and surfing habits will only grow. Transparency, an equitable value proposition and adequate legal protection are the three tracks on which this train must run.

Saturday, April 23, 2011

Commoditization of Face Recognition Technology – Is there a dark cloud lurking behind?


I continue to be intrigued by the notion of a public camera looking at an individual and matching up his or her personal information for whatever purpose, so I wanted to do a little follow-up on my last blog. Just like the Brazilian World Cup soccer initiative, facial recognition algorithms have already been applied successfully to many public security applications.

The question is what happens when it becomes part of everyday technology, commoditized into apps that are available universally to all.

Think of Minority Report, the 2002 Steven Spielberg movie, where cameras captured and read Tom Cruise's face and customized ads for his character popped up. Immersive Labs, a New York startup, recently introduced its smart billboard technology, which combines video analytics with multiple data sources, such as Twitter or Foursquare information, to select the most suitable ad to display to the consumer nearby. The software seemingly understands the geometry of faces well enough to determine the gender and approximate age range of the face looking at the billboard's webcam - not exactly the more prying technology shown in Minority Report, but certainly a step in that direction.

The advertising billboards of the future will promote a product by analyzing the audience to display items that the viewer is more likely to buy. The concept is not new. Last year NEC Corp. demonstrated its interactive digital signage solution, now popular in Japan. A built-in camera captures an image of anyone looking at the signage; the system then compares it to more than 10,000 stored patterns to determine gender and approximate age, and finally displays an image that is most likely to appeal to that audience.
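The final selection step such a system might use can be sketched as a simple lookup from the detector's demographic estimate to an ad. This is a minimal, hypothetical Python sketch - the categories, ad inventory and function names are all invented for illustration, not NEC's actual logic:

```python
# Hypothetical ad-selection step for a camera-equipped sign. Assumes an
# upstream face detector has already produced a (gender, age band) estimate.

AD_INVENTORY = {
    ("female", "18-34"): "spring fashion line",
    ("female", "35-54"): "skincare series",
    ("male", "18-34"): "gaming console",
    ("male", "35-54"): "business travel deals",
}

def pick_ad(gender, age_band, default="store-brand promotion"):
    """Map the detector's (gender, age band) estimate to an ad to display."""
    return AD_INVENTORY.get((gender, age_band), default)

ad = pick_ad("female", "18-34")   # -> "spring fashion line"
```

The real systems add a confidence score to each estimate and fall back to a generic ad when the detector is unsure; the principle, though, is this lookup.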

Digital Signal Corp. seemingly wants to move further in that direction using 3D long-range facial recognition. The Virginia-based startup's laser radar system can provide a range of biometric data which, among other things, can tell you whether someone appears to be particularly nervous or tense.

A recent article nicely summarized my emotions - the technology is both “exciting and creepy”. 

By 2020, 10% of digital signage is expected to include facial identification technology for personalized advertising. The day may be close when you'll walk into a restaurant and the host will greet you by name and suggest a menu that will appeal to your palate; or you may be greeted with a sign announcing the price and location of the items you are most likely to be interested in as you walk into a department store.

I don't want to add an Orwellian twist to such free-market innovations, but I certainly bristle at the thought of a prying billboard camera that scans my face every time I stand in front of it and searches through a database of my personal information to create an ad that I simply won't be able to ignore. The nagging thought in my mind is not about the technology but about the scenario when the technology is available to the masses.

A recent startup, Viewdle, is bringing a cool little addition to your Android phone. You shoot photos of your friends and family, it recognizes their faces (after you teach it who's who), then automatically tags the photos for Facebook. Perfect for Facebook users!
Your smartphone will now be able to do these simple face recognition calculations in real time, comparing faces to images previously stored and identified. The company's website features a video of five women walking toward the camera as labels pop up to identify them and post their Facebook comments in real time.

Google Goggles is an exciting little app for object recognition. What would be the implications of a possible mash-up of facial recognition technology with Goggles? There are some interesting use cases, as reported here. You meet someone, say at a conference, whose name you can't remember. You simply take a quiet, flash-free photo, upload it into your device and let an app find any tags associated with the photo, essentially launching a search engine to find the name and any other details available on the web. Really cool!

But what about that person in the bar shooting your photo incognito to search your profile in a web database with less honorable motives? What about that stalker who can secretly take a photo in the street and then pull up all your information tagged to your pictures in cyberspace?

I am sure a company like Google, with its demonstrated effort to do public good, intends to make sure there is no abuse of people's privacy from the use of its tools. But things go wrong, as evidenced in Google's settlement with the FTC over released email contacts associated with the launch of Google Buzz. In a similar vein, Pandora internet radio was served with a subpoena as part of a federal grand jury investigation of personal data collection through its popular Android and iPhone app. Google also got into trouble for seemingly collecting private information with its Street View data collection.

Clearly leaks happen when it comes to personal information – with or without malicious intent.
  
Recent reports that iPhones and iPads regularly record the device's position and store it in a hidden, unencrypted file will do little to assuage consumers' concerns about their personal data being used or potentially abused. A similar situation exists for Android phones as well. Technically, these are all opt-in, yet the follow-up story in the WSJ indicates that the data contained a unique identifier tied to an individual's phone (though not to the user's name) and so is not totally anonymous.

Don't get me wrong. I am sure all the kinks will be ironed out and all these possibilities will eventually be more about convenience and quality of life and less about surveillance or loss of personal data. As I indicated in an earlier blog, technology after all has overwhelmingly been the biggest driver of human progress. Despite the nuclear disasters at Three Mile Island (US, 1979), Chernobyl (Ukraine, 1986) and Fukushima Daiichi (Japan, March 2011), today over 440 nuclear power plants worldwide run safely to generate 14% of global electricity. The benefits of nuclear fission technology decisively outweigh the pitfalls.

So, I hope, will be the case with all the commercial recognition techniques. 

Tuesday, April 19, 2011

RoboCop, Facial Recognition and the Leap of Innovation from Fantasy to Reality


Last week, road testing of a high-tech mobile facial recognition system by Brazilian police evoked memories of the 1987 movie RoboCop among journalists and bloggers all over the globe. The parallel to the high-tech helmet worn by RoboCop Alex Murphy was unavoidable. The system the Brazilian police are planning to use during the 2014 World Cup soccer tournament combines a high-power camera equipped with face recognition technology, connected via wireless link to a back-office computer. Face recognition algorithms have been perfected over many years now and have been used by police, airport security systems and many other surveillance systems. The Brazilian application is an interesting mobile version of a similar system. The police will have sunglasses with cameras that can scan 400 faces per second. The camera has a wireless link to a database that can cross-check 46,000 points on the face against 13 million mug shots of known criminals to find a match.
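The matching step at the heart of such a system is, conceptually, a nearest-neighbor search: each face is reduced to a vector of measurements, compared against a database by distance, with a threshold deciding "match" versus "no match". Here is a minimal Python sketch of that idea - the three-element vectors and names are toy examples I invented; a real system compares tens of thousands of measurements per face:

```python
# Toy sketch of face matching as nearest-neighbor search over feature
# vectors. Vectors and threshold here are illustrative, not from any
# real system.
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(probe, database, threshold=0.5):
    """Return the closest database entry, or None if nothing is close enough."""
    name, best = None, float("inf")
    for candidate, features in database.items():
        d = distance(probe, features)
        if d < best:
            name, best = candidate, d
    return name if best <= threshold else None

mug_shots = {"suspect_a": [0.1, 0.9, 0.3], "suspect_b": [0.8, 0.2, 0.5]}
hit = best_match([0.12, 0.88, 0.31], mug_shots)   # close to suspect_a
miss = best_match([0.5, 0.5, 0.9], mug_shots)     # near nobody -> None
```

Scanning 400 faces per second against 13 million records then becomes an indexing problem - real systems replace this linear scan with approximate nearest-neighbor indexes.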


Physicist and futurologist Michio Kaku in a recent CNN interview suggests that we take cues from Sci-Fi to get a sense of the future.  “Physics makes science-fiction happen” was his prescient comment as he sprinkled examples from Matrix, Predator, Blade Runner and Star Trek to paint a notion of the world twenty, fifty or hundred years from now.

He definitely has a point. Kaku’s real hero Einstein once said “Imagination is everything. It is the preview of life’s coming attractions”. The same imagination that motivates creators of sci-fi novels or movies can morph into reality in the hands of scientists and engineers.

Examples are many. While the notion of a submarine existed in the 19th century, Jules Verne in his 1870 novel "Twenty Thousand Leagues Under the Sea" first imagined a submarine named Nautilus that could stay and travel under water for many days, driven by an ultra-quiet engine running on processed fuel. A little under a century later, the world's first nuclear-powered submarine, USS Nautilus, was launched in 1954. Verne's 1865 novel "From the Earth to the Moon" presaged a close parallel of the Apollo 11 mission to the moon in 1969.

And long before Steve Jobs envisioned the iPad, Gene Roddenberry had already created his own celluloid version in Star Trek: The Next Generation.

So the next time you discount those odd gizmos in a late-night sci-fi movie, think again - right there may be an idea for the next big invention!

Saturday, April 16, 2011

Web 2.0, Social Media and Neologism


Language has always been a dynamic repository of words, expressions and metaphors, so the addition of new words to the lexicon is not a novel concept. But when it comes to technology, it is not just the way we do things that is changing at a maddening pace - the words we use and the lingo we speak are morphing at an ever-accelerating speed too.

Even though it may make sticklers for proper English bristle, the Oxford English Dictionary (OED) just added brand-new entries like OMG, LOL and FYI that so far existed only in chat rooms and the texting world. So don't be puzzled when they show up next in your favorite newspaper article along with expressions like BFF (best friends forever), IMHO (in my humble opinion) or TMI (too much information).


Words that did not exist even a decade ago now touch us like never before, as we talk about spam, blog, cloud computing, phishing, crowdsourcing, e-books, geotagging, GPS and walled garden. NetLingo.com even has a YouTube video explaining the "walled garden" phenomenon. So the next time you and I talk, I won't be raising curious eyebrows as you sprinkle the conversation with words like "scareware" (a malicious computer program), "cyberbullying" or "clickjacking" (manipulating a user's activity by concealing hyperlinks).

Social Media and the Lexical Meme:

Since the days of Myspace, social media has continued to unleash a torrent of new words. In addition to defining the avian meaning (i.e., chirping), Dictionary.com now formally defines Twitter as "a website where people can post short messages". Twitter, in turn, has spawned many new derivatives like tweet, tweetup and hashtag - all included in a dedicated glossary page.

The truth is, lexically speaking, the phrase "catching up with technology" is assuming a whole new meaning. As I was writing this blog, I came across several words that are seemingly part of the everyday parlance of social media. Some are more intuitive, such as moblog (a blog published from a mobile device); others are less so but perfectly meaningful, like copyleft (the legal framework meant to balance the flaws of copyright), creative commons or digital story. A good reference for the uninitiated may be found on this socialbrite.org page.

While Twitter and Facebook are household terms understood globally, you may come across a few other terms, like "Digg Me" or "Disqus", tagged below the next blog post you read. Digg is a social news site that lets people discover and share content. Users submit links and stories and the community votes them up or down - users can "digg" stories they like or "bury" the ones they don't.

The word I personally love most is "mashup". It shows up in multiple contexts - in music and in software development. Mashup songs are a popular trend that fuses two known songs into a new composite remix. Here's a popular one - We R Dynamite, a mashup of Taio Cruz's Dynamite and Ke$ha's We R Who We R. In application development, mashup techniques combine data from multiple sources to create a new integrated app. An early mashup example was the apartment-hunting tool that took data from sources such as Craigslist and combined it with a mapping or photo database (e.g., Google Maps and Street View) to create a new app with visualizations of the data. Zillow mashed up data from other real estate websites and combined it with Google Maps or MS Virtual Earth to create real estate applications displayed to the user on a map. With standardized APIs, the possibilities are limitless when it comes to creating new value simply by mashing up existing apps.
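At its core, a data mashup is just a join across two sources on a shared key. Here is a minimal Python sketch in the spirit of the apartment-hunting example - the listings, coordinates and field names are invented stand-ins for what would really come from a listings feed and a geocoding API:

```python
# Hypothetical mashup: join apartment listings (one source) with
# geocoded coordinates (another source) into one map-ready dataset.

listings = [
    {"id": 1, "address": "123 Main St", "rent": 1500},
    {"id": 2, "address": "456 Oak Ave", "rent": 1800},
]

# Pretend this came from a separate geocoding service, keyed by address.
geocoded = {
    "123 Main St": (37.7749, -122.4194),
    "456 Oak Ave": (37.7849, -122.4094),
}

def mash_up(listings, geocoded):
    """Join the two sources on the shared 'address' field."""
    combined = []
    for item in listings:
        lat, lon = geocoded[item["address"]]
        combined.append({**item, "lat": lat, "lon": lon})
    return combined

map_points = mash_up(listings, geocoded)  # ready to plot on a map widget
```

Each listing now carries its coordinates, so a mapping API can plot it directly - that join, repeated across public APIs, is all a mashup really is.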

There is an app for that –

As is always the case now - no matter what subject we might be talking about, there is possibly an app for that. There is a $0.99 iPhone app to tell you all about the new acronyms for the web. So if you're the parent of a teen and want to make sense of a text someone might be passing to your youngster, or vice versa, here is your education.


Of course there will never be a dictionary that keeps a live record of all the new social code words popping up every day, though NetLingo.com maintains a running compilation of such a list. To make sense of code words in unwanted texts or spam, and to know when not to touch those mails or texts, here is a good reference for parents. A word of caution - while many of the acronyms are innocuous, several carry inappropriate connotations. Hopefully they will help you raise the red flag when you see one.

That's all for today. 10Q for your attention. BBFN.




Sunday, March 27, 2011

Efficient Multi-tasking – The Myth that does not seem to go away


The other day at work I walked to someone's office for a pre-appointed meeting. I was on time, warmly welcomed and gestured to a seat. Soon enough, I realized that the speakerphone on the desk was on and there was a live call that my host was intermittently joining and muting. Regardless, our discussion slowly started. Soon, I noticed that he was also checking his BlackBerry calendar from time to time. Our meeting continued with an occasional spaced-out look from my host as he tried to keep an ear on the ongoing call. Sometimes he made a few disjointed comments to the unseen participants of the call, efficiently muting it back. Suddenly, distracted by some supposedly "hot" email chime, he began texting someone. At this point, I'd had it. I politely suggested that I might have come at a wrong time and perhaps we could reschedule. "Oh no, no...", he was superbly gracious. "Your time is important, so let's continue and finish the conversation now. Don't worry about the call or the SMS. I am a pretty good multi-tasker"...

It is time to seriously challenge the "coolness" and glory of such multi-tasking in today's culture. Look at the well-known 2009 Stanford University study by Clifford Nass involving several self-declared super-multi-tasking youngsters. These kids were at the leading edge of simultaneous IM conversations, texting, emailing and social networking while doing everything else, and claimed to have full cognitive control. However, when given a series of mental tasks like puzzles and memory games, these chronic multi-taskers consistently under-performed relative to low or non-multi-taskers. Check out this PBS interview of Dr. Nass on his research.

So what's at play here? The multitude of web and information technologies has made multi-tasking both easy and endemic. But every time we switch from one task to another, there is a "switching cost" in time, as we turn off one part of the brain and turn on another. Second, the human brain functions by relating the data, pictures or information in front of us. As we switch from task to task, images or data from all the other tasks tend to clog the brain, impeding any analytical effort that needs only selective data from that mess. The multi-taskers' rationale is that they do five things at once because they don't have time to do them one at a time. It turns out they might be more efficient if they actually did things one after another.

Even on a computer with a single-core microprocessor, multitasking involves time-sharing, with only one task active at a time. Typically, the tasks are rotated through many times a second, with a minuscule but finite "context switching" time lost in between. Only with multi-core processors can each core perform a separate task truly simultaneously.

For those more geekily inclined, there is a nice 2001 blog piece by Joel Spolsky that gives a simple example from a programming point of view to establish that:
1. Even without any task-switching costs, sequential processing gets you results faster on average.
2. The longer it takes to switch tasks, the bigger the penalty you pay for multitasking.
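Both points can be checked with a toy simulation - this is my own sketch in arbitrary made-up time units, not Spolsky's original example. Two tasks of ten units each are run to completion one after the other, versus interleaved one unit at a time with an optional per-switch overhead:

```python
# Toy model of sequential vs. interleaved task completion.
# Time is measured in arbitrary "units"; switch_cost is overhead per switch.

def sequential(task_len, n_tasks):
    """Each task runs to completion; return each task's finish time."""
    return [task_len * (i + 1) for i in range(n_tasks)]

def interleaved(task_len, n_tasks, switch_cost):
    """Round-robin one unit at a time; every slice may add switch overhead."""
    finish, clock = [0] * n_tasks, 0.0
    for unit in range(task_len):
        for task in range(n_tasks):
            clock += 1 + switch_cost
            finish[task] = clock
    return finish

seq = sequential(10, 2)                    # [10, 20] -> average finish 15
par = interleaved(10, 2, switch_cost=0)    # [19, 20] -> average finish 19.5
```

Even with zero switching cost, the average completion time is worse when interleaving (19.5 vs. 15 units), because both tasks drag on to the end - and any positive `switch_cost` widens the gap further. That is exactly Spolsky's two points in miniature.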

To elaborate the concept, let me borrow a graph from the project management world (cited on codinghorror.com), from Gerald Weinberg's "Quality Software Management: Systems Thinking", that conceptualizes the relative time share between software projects and the time wasted in context switching as more projects are added to the workload. Be sure to check a similar blog post titled Twitter Curve by George Mulhern.
  
The point, however, is NOT to avoid or shun these fantastic tools that the human mind keeps innovating. I love Twitter, LinkedIn, Facebook, texting and SMS. The point is that, our comfort and expertise notwithstanding, we often take a myopic view and believe that we can text, email, phone and think at the same time without sacrificing the quality of any of these. The going paradigm is often that we can IM, watch a movie and do homework or solve problems without degrading the quality of the solving process. Overwhelmingly, studies indicate that we really can't - at least not without degradation in time, quality or the ability to think effectively.

Technically speaking, it is possible that after thousands (perhaps millions) of years of such multi-tasking, the human brain might evolve into a multi-core processor, bridging the gap of context switching between individual tasks. (Those who are not convinced by Darwinian science of course won't have even that hope; the only possibility for them might be to petition the Creator to go back to the drawing board and redesign the human brain for multi-core processing.)

Let me end today with another blog article, by Myra White, where she mentions psychologist Csikszentmihalyi's work to underline that some of our most meaningful and creative moments happen when we're so absorbed in the task at hand that we become one with it and lose track of everything else. Many Aha or Eureka moments of innovation happen during these very moments.




Sunday, March 20, 2011

2011 Tsunami – Technology, Warning System and Japanese Perseverance


The double whammy of earthquake and tsunami in northern Japan was a sobering reminder of how tiny we humans still are in the hands of the forces of nature. The live videos of furious water mowing down and engulfing buildings, trucks, cars and neighborhoods as if they were mere debris seemed like a trailer from a disaster movie. The ruinous aftermath notwithstanding, I can't help thinking what would have happened had this occurred in any other coastal or island country... like Haiti, Fiji or Madagascar.


Over the years, Japan has built perhaps the most advanced earthquake warning system, deploying thousands of seismographic sensors across the whole country and setting up a quick broadcast system using a one-to-many version of text messaging called SMS-CB (Short Message Service - Cell Broadcast).


Now a quick segue to Earthquake 101... An earthquake typically produces two types of tremors - the faster but less destructive P-waves, and the slightly slower but damaging shear or S-waves. The speed difference between the two, measured across thousands of ground sensors, allowed Japan to assess the location and severity of the quake and send warnings within seconds, not only to citizens but also to public infrastructure, to initiate emergency procedures. For instance, the residents of Tokyo, which was 373 km from the epicenter, had a valuable 80 seconds to take life-saving steps to safety. Other areas, especially in northern Japan, had shorter windows, but still a valuable few seconds to react.
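The arithmetic behind that window is simple enough to sketch. Wave speeds vary a great deal with geology, so the round numbers below (P around 6 km/s, S around 3.5 km/s) are assumptions for illustration only; and since alerts actually go out from sensors near the epicenter, the real warning window can be even longer than the local P-S gap computed here:

```python
# Back-of-the-envelope warning window from the P/S wave speed gap.
# Speeds are assumed round numbers; real values depend on geology.

P_SPEED_KM_S = 6.0   # faster, less destructive primary wave
S_SPEED_KM_S = 3.5   # slower, damaging shear wave

def warning_window(distance_km):
    """Seconds between P-wave arrival (detection) and S-wave arrival."""
    return distance_km / S_SPEED_KM_S - distance_km / P_SPEED_KM_S

# e.g., Tokyo at ~373 km from the epicenter: roughly 44 seconds of
# S-wave lead time from the local P-S gap alone.
window = warning_window(373)
```

Broadcasting the alert the moment sensors near the epicenter detect the P-wave, rather than waiting for it to reach you, is what stretches that raw gap into the 80 seconds Tokyo actually had.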

Interestingly, here in California, researchers at Stanford University are pursuing a crowd-sourcing technology – Quake-Catcher Network – aggregating data from volunteer laptop accelerometers to detect quakes. 

...The Japan disaster is also a test case to gauge the progress of tsunami prediction technology since the Asian tsunami of 2004. Clearly, the series of sensors spread across the Pacific and Indian Ocean floors (http://www.ndbc.noaa.gov/) and the complicated code that processes all their data are paying off. The local tsunami warning in Japan came three minutes after the quake struck, while the one from the regional Pacific Tsunami Warning Center came within nine minutes. As a result, residents of the hardest-hit areas in northern Japan had 15 minutes of warning, saving many lives - yet not soon enough for the tens of thousands missing. The response time may still not be at its desired best, but it's a lot of progress since 2004.

Technology aside, I cannot help being amazed at how exemplary and powerful the innate, head-bowing Japanese politeness can be at this devastating time, as I see the pictures of Sendai residents waiting in a mile-long rainy queue outside a food store. The power of national discipline over social chaos could not have been more apparent. The contrast is indeed striking, given the pictures of mayhem and marauding that so often accompany disaster and misery. The ongoing nuclear mess notwithstanding, this is one country that will surely rebuild once again.

Sunday, March 6, 2011

Identity Peddling – the Newborn Stepchild of Web Technology


My marketing professor in business school always said that the true market size of any consumer group is "one" - meaning every individual has unique tastes that defy bundling into pre-designed segments. So targeting and catering to that unique person is the holy grail of all advertisers. Classified newspaper ads, mail solicitations and telemarketing have all been chasing the same ultimate prize for decades. And now the friendly web has unleashed a new secret weapon...

Back in the nineties, it seemed cool when Netscape introduced "cookies" in its browser so that you didn't have to remember passwords, site preferences or the individual contents of your shopping cart. But back then, advertisers cared little about online marketing and we were free to roam and surf without having to worry about who was stalking us and following our trail in cyberspace. But good old times are always short-lived.

Pretty soon, third-party cookies began to be placed when you visited a site; instead of helping you remember past surfing info, these text files were designed to be sent back to build a database of your browsing interests. Interestingly, the law is yet to catch up, so there is no legal protection against such surveillance. Naturally, there is now a growing field of legitimate entrepreneurs who build databases of consumers like you and me, track our online destinations and behaviors, and sell them to anyone willing to pay the price. These identity peddlers are increasingly fueling a potentially not-so-holy alliance between the internet and advertisers.

A quick Google search revealed many incarnations of the little stealth spies we are all exposed to... zombie cookies, Flash cookies, beacons and on and on. Unlike their original ancestor (the Netscape Navigator cookie), these can hold a lot more data, are hard to get rid of and are often hard to find.
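The mechanism that makes third-party tracking work is worth a small sketch. When pages on different sites embed content (an ad, a beacon) from the same tracker domain, the browser sends the tracker its cookie on every visit, so the tracker can stitch a browsing history together under one ID. This toy Python simulation uses invented domain names to show the idea - it models the bookkeeping, not real HTTP:

```python
# Toy simulation of cross-site tracking via a third-party cookie.
# Each "page load" embeds a beacon from one tracker, which logs the visit
# under whatever cookie ID the browser presents (or mints a new one).
import uuid

tracker_log = {}   # visitor_id -> list of sites where the beacon fired

def load_page(site, tracker_cookie):
    """Simulate visiting `site`, which embeds the tracker's beacon."""
    if tracker_cookie is None:
        tracker_cookie = str(uuid.uuid4())   # tracker sets a fresh ID
    tracker_log.setdefault(tracker_cookie, []).append(site)
    return tracker_cookie                    # browser stores it for next time

cookie = load_page("news.example", None)
cookie = load_page("shopping.example", cookie)
cookie = load_page("health.example", cookie)
# The tracker now holds a cross-site profile: three sites under one ID.
```

No single site handed over your history; the profile emerges purely because the same cookie rides along to the same tracker from every page. That is why deleting (or blocking) third-party cookies breaks the chain.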

As one of the sites suggested, I did a little "YouTube test" for Flash cookies. Go to YouTube, launch a music video, check its volume setting and change it. Delete all cookies in your browser's settings and close the browser. Re-open the browser, launch the same video and check the volume setting. Notice that it retains your new setting instead of returning to the default. That's your Flash cookie helping you out.

I was intrigued by a recent study showing that more intrusive versions of these little text files are now rampant, often scanning in real time almost anything you may be doing on a web page - like clicking a link or typing a domain name - and deploying advanced analytics to estimate your location (zip code), shopping interests, income bracket, medical status, family info, age, gender and so on. The data, supposedly anonymized, are then sold in auctions or stock-market-like exchanges.

Not surprisingly, the sites most loaded with such stealth trackers are ones like Encyclopaedia Britannica's Merriam-Webster.com or Dictionary.com, which people visit to research topics of interest. The WSJ study, for instance, found that a tracking file from the healthcare ad firm Healthline Networks Inc. snoops on you when you go there, and if you are researching topics such as anorexia or eating disorders, you may suddenly start seeing ads from the appropriate pharmaceutical firms in your web pages!

Web technology has brought the world together and created social media that is now innate to how the world operates. Good-faith sharing (and occasional narcissistic exhibitionism!) is fundamental to the Facebook generation. Such mass marketization of our personal lives hits directly at that openness and hence can be unsettling, especially if you have a condition you want to keep private.

So the next time you surf the web, be aware that the intimacy between you and your browser page is no longer a closed-door affair. The breadcrumbs you leave are constantly picked up by scavengers, who then peddle them to anyone willing to pay and anyone who may have something to gain by knowing your identity, habits, interests and personal pursuits. And more importantly, remember that there are no legal limits on how that data can be used.

By the way, all major browsers - Internet Explorer, Firefox, Chrome, Safari - have been stepping up, pledging to add protections similar to the FTC's Do Not Call registry for telemarketers. But ultimately your protection is your responsibility. After all, there may be a little conflict of interest; e.g., limits set by Explorer could impact Bing's search-based ad business.

And all social media and Facebook users may have a little extra caution to exercise. Facebook has been in the middle of several privacy breaches in recent months, putting in question its promise of keeping its members' profile data secure. Last October, several media outlets (the Wall Street Journal, PC Magazine, the Washington Post, The Huffington Post) reported that the ten most popular apps on Facebook were transmitting users' IDs to other companies. While Facebook seems to have taken corrective steps, the incident showed yet another hole through which you are constantly pried upon by many businesses - some legal and some perhaps not so legal, some benign and some perhaps not so benign.

Just be aware.

Saturday, February 26, 2011

Technology Matters and so do the science and math behind it


I was watching a videocast about this year's Campus Party event in São Paulo - a gathering of young minds, largely from computing, engineering and the internet, who come together in a party-like environment to learn, share and show their innovations, new hardware, devices and applications in games, entertainment, communications and much else. What an event!

The story was not on the front page. The front page talked about Libya, the governor-union duel in Wisconsin, protests in the Middle East and soaring oil prices. Yet it was no less significant. Who knows whether some future Turing, Tesla, Edison, Bell or Jobs was hiding in that crowd? The unique gathering was all about digital technology and sharing new ideas in an entertaining atmosphere. I could not help thinking that technology (digital or otherwise) is probably the most potent force that has given us tools to continuously shape and re-shape our lives since our cave days, and has pushed the societies that incubated it to the next rung on the ladder of civilization.

Arts, business, trading, travel, sports and medicine are all examples of activities humans have engaged in since the beginning of history. Throughout history, ruling classes ruled and waged wars, the trading class managed business, farmers grew food, teachers taught the existing body of knowledge and so on. But it is really the scientists and technologists who quietly expanded our knowledge to improve how we do all of those things - how we build, how we travel, how we understand the human body and its maladies, how we communicate, how we entertain, and everything else in between.

Pursuit of science and technology is arguably the single most powerful human endeavor that has contributed to human progress since the days we discovered fire and invented wheels. The societies that supported their inventors and scientists usually made bigger leaps and conversely, those that suppressed science regressed. The societies in Europe thrived when they mentored the likes of Da Vinci and Newton. And they regressed when scientists like Galileo were chastised because they discovered new physical laws that challenged existing beliefs whose perpetuation was deemed critical for preserving the religious or political establishments of the time.

The more I think about it, the clearer it is that science and technology have always been the key transformative force of our civilization. While philosophy, arts and morality have their rightful place in our progress, the milestones of human history are often marked by aqueducts, windmills, steam engines, electricity, automobiles, transistors, robotics, genetics and of course the internet - all outcomes of the physical sciences. Today the quality of our life continues to benefit from innovations that make faster chips, better computers, more efficient transportation, better surgical technology, more efficient communication and so on.

By innovation, I mean new ways of doing things. Just to clarify: while designing mortgage-based derivatives or other financial tricks may be considered innovation in some sense, and may benefit certain businesses in the short term, that is not what I am talking about.

And more often than not, the pursuit of science and mathematics catalyzed these inventions. The steam engine that pushed England into the modern industrial economy was invented because James Watt and Joseph Black at the University of Glasgow knew enough about the science of heat transfer. Steam locomotives and steam-powered ships then revolutionized transportation and international trade, completely reshaping life – thanks largely to the knowledge of mechanics and the sciences of thermodynamics and metallurgy.

While lifestyle choices and occasional pastoral edicts against drinking are definite contributors, the largest impact on increasing life expectancy – from the mid-forties in the early twentieth century to the low eighties now – comes from breakthroughs in bio-technology and genetics powered by Darwinian science.
And hence, technology matters, more than ever, and so does the pursuit of science and mathematics that’s often behind the technology.

Take Google for example. A whole new web of economic activity now revolves around Google. (There is even a knock-off in China named Baidu that is prospering equally well there.) But it all started because two kids – one from Russia and one from Michigan, US – who trained themselves in mathematics, science and logic ended up working together to build a very smart search engine that could crawl the entire web to provide information far superior to any existing tool. (It was only much later that the company they formed landed on the sweet spot of search-based advertising that opened the flood gates of multi-billion dollar revenues.)

Similarly, if we cut through all the hype and buzz around Mark Zuckerberg’s billion-dollar social media phenomenon, we see that luck, timing and business savvy aside, Facebook came to life not just because Zuckerberg dreamed it up but also because he is deeply passionate about programming, has loved software applications since middle school, is a mathematical thinker and could actually build the algorithms behind the application.

It takes a scientific and technology-literate mind to take an existing technology and make it better or find a new application. When Nintendo created the Wii’s motion-sensitive controllers for games, it opened the door for a whole slew of applications for gesture-controlled devices. Take for instance Johnny Lee, a student at CMU, who applied his knowledge of electronics and his imagination to modify the Wii box to add head-tracking functions. Only a couple of years later, the Xbox 360 Kinect can now respond to players’ movements automatically and in 3D. And now robotics engineers and students in universities are tinkering with rescue robots that use the same technology to maneuver in rescue situations.

It is largely the passionate practitioner of technology who comes up with the new ways of doing things that take human society forward – while politicians make their crafty moves to retain power, priests dole out their Kool-Aid so people continue to look backward, and traders stay busy buying and selling to make that extra buck, as they have been doing for thousands of years.

But nothing in life is unmixed. There is a dark side to science and technology, though for it they should share no blame: after all, science and technology are agnostic of human intentions. The same technology that enables space exploration is also used for the destructive missiles that warring groups aim at each other. The knowledge of bio-technology that has extended our health-span and improved the quality of life of millions has also led to the weapons of bio-warfare, for the benefit of power-hungry governments, armies and fanatics. But that’s a discussion for another day.

The point remains that the vast majority of improvements and positive disruptions in human life come largely through progress and inventions powered by science and technology. 

Wednesday, February 16, 2011

Language, Artificial Intelligence and WATSON - the new super-computer Jeopardy Champion


This week’s Jeopardy IBM challenge pitted two human super-champions (Ken Jennings and Brad Rutter) against a custom-designed IBM super-computer named Watson. The game was intended to showcase machine capability not just for fast hard-core calculation but also for softer yet equally complex skills like language processing. Thanks to the publicity in several media channels, I managed to catch the second and third games of the three-day series. Curiously enough, I found myself in an Us-and-Them mode, rooting for the human players (who, of course, lost thoroughly to the machine).

It has been more than four decades since Stanley Kubrick introduced the fictional super-computer HAL in 2001: A Space Odyssey – a machine perfected to human-like complexity, displaying emotion and language-processing ability on top of its raw artificial intelligence. The quest to develop a real-life HAL remains an evolving goal, although IBM seems to have made that goal one of its promotional targets, while providing some entertainment to us along the way.

First was the introduction of Deep Blue in 1996 to play chess against grandmaster Garry Kasparov. There was, however, little human about Deep Blue; its chess moves came from brute-force computation over established rules. Even that achievement remains embroiled in controversy: Kasparov, who lost, complained of unfair human intervention by IBM to reprogram the machine during the game.

The introduction of Watson on Jeopardy’s stage has passed without any such controversy. The human competitors seemed to gracefully accept its lightning speed and its agility with the buzzer. In fact, one of them added the following sub-text to his final Jeopardy response: “I for one welcome our new computer overlords.”

Over and beyond our cognitive abilities, language – with its puns, metaphors, analogies and double meanings – represents a core human attribute, one that gives expression to our personality, attitude and perhaps to more abstract concepts like consciousness. In fact, voice and language abilities are often used to give human characteristics to animals or other natural objects in art, literature and movies. Computer scientists have even defined a “Turing Test” (named after Alan Turing) to judge a machine’s intelligence by testing its natural-language, conversational abilities. An electronic system is said to have passed the Turing Test if its conversation is indistinguishable from that of a human.
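Just as a playful illustration (not any official formulation – the judge, the sample replies and the pass criterion below are all hypothetical), the structure of the test can be sketched in a few lines of code: a judge reads replies without knowing their source, and the machine “passes” if the judge cannot do better than chance.

```python
import random

def run_turing_test(judge, human_replies, machine_replies, trials=1000):
    """Toy Turing-test harness: the judge sees one reply at a time and
    guesses whether it came from a human or a machine.  The machine
    'passes' if the judge's accuracy stays near chance (~50%)."""
    correct = 0
    for _ in range(trials):
        # Flip a coin to pick the source, then draw a reply from it.
        if random.random() < 0.5:
            reply, source = random.choice(human_replies), "human"
        else:
            reply, source = random.choice(machine_replies), "machine"
        correct += (judge(reply) == source)
    return correct / trials

# A naive judge that flags overly precise, digit-laden answers as machine-like.
def naive_judge(reply):
    return "machine" if any(ch.isdigit() for ch in reply) else "human"

humans = ["I think it was sometime in the sixties.", "No idea, sorry!"]
machines = ["1968-04-02T00:00:00Z", "Confidence: 97.3%"]

accuracy = run_turing_test(naive_judge, humans, machines)
```

Against these caricatured replies the naive judge wins every time; the interesting machines are the ones that drive its accuracy back down toward a coin flip.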

With its stacks of ninety servers and its near-instantaneous ability to process the text of each clue, Watson’s victory was never much in question. The more significant part of the game was probably the few rare clues where Watson faltered. One example was a clue on what the Schengen treaty opened up (I’m paraphrasing) – Watson’s response, “passport,” was related but not quite the contextually correct one, “national borders.” The real-life machine probably still has its work cut out to pass the Turing test!


Regardless of those rare faux pas, Watson definitely represents a milestone in computational progress at processing natural language and parsing convoluted statements, often with multiple meanings. We are surely making progress toward the day when we may have “Turing Test”-compliant appliances, cars and droids that interact more “humanly” with us.

Sunday, February 13, 2011

Can two turkeys make an Eagle? The curious alliance between Microsoft and Nokia….


All technology news columns today headlined the alliance between Microsoft and Nokia – two old giants, in software and hardware respectively, who have so far individually failed to make a dent in the hot and raging US smartphone market.

The tweet from a Google executive, as reported in the Wall Street Journal (Feb 12, 2011), was pretty clear – “Two turkeys do not make an eagle.” An interesting comment, but perhaps a little too early.

Those using a smartphone (i.e. those portable computers that run nifty apps and also double as phones) know that the market here in the US is largely a “trio-poly” – dominated by the iPhone, BlackBerry and Google with its band of phone makers running the Android operating system.

Nokia is the dominant phone maker in the rest of the globe with its own OS, Symbian. Symbian phones still have the biggest market share outside the US. In fact, in many developing countries Nokia is synonymous with the mobile phone, just as Xerox was once synonymous with the photocopy. Yet Nokia’s smartphones did not quite click in the US, and its Symbian-based eco-system of apps never quite caught on.

Microsoft has a similar story from the software side. While this tech dinosaur continues to have a thriving cash cow in the continuous re-incarnations of its Windows OS, its forays into successful hardware are so far limited to the Xbox gaming system. Its repeated attempts at music players (Zune) and phones have not quite panned out.

Enter Stephen Elop, the new Nokia CEO, freshly harvested into Nokia from Microsoft, who promptly goes into action and pushes Nokia into bed with his previous employer. In the process, Nokia is sidelining not just Symbian but also MeeGo, the budding open OS it was sponsoring specifically for smartphones.

On paper, the alliance is complementary and hence should be promising: one lacks a successful hardware platform, and the other needs an OS that could pull it into the upper echelons of the smartphone market. Success or failure, the unfolding story will certainly be a sure-shot entry into the case studies that business schools love to teach in their strategy courses.

Saturday, February 12, 2011

Social Networking and Political Revolution


Long before Mark Zuckerberg became TIME’s Person of the Year, James Buck, a UC Berkeley journalism student, was arrested in Mahalla, Egypt, on April 10, 2008 while covering an anti-government protest there. On his way to the police station, James sent a one-word tweet to his friends from his cell phone – “Arrested.” His fellow Twitterers spread the word; within hours the university and the US government were alerted, diplomatic actions were set in motion, and Mr. Buck was released.


As President Obama said yesterday, the “wheel of history turned at a blinding pace” over the last two weeks in Egypt, and social networking had more than a fair share in it. Yesterday, when CNN asked Wael Ghonim (the now-famous activist and Google employee), “First Tunisia, now Egypt. What's next?”, Ghonim’s intriguing response was “Ask Facebook.”

From a small start-up phenomenon, social networking technology has blossomed into a force well beyond its fad-appeal. True, people continue to demonstrate an unstoppable obsession with reporting round-the-clock snippets of their daily life, and profound and not-so-profound private details continue to clog Facebook walls, yet fundamentally there is something at work… something very basic and human. Ultimately, all the new social networking tools cater to an innate trait – the urge to connect, to communicate, to congregate, to share both sorrow and success, to be in touch, to support and be supported and yes, to gossip and to show off.

We had the same need when we were hunter-gatherers congregating around the cave-fire at the end of the day. That’s why the big tree in the village square was always the place where everyone met and made big decisions, shared all the local news and gossip, and where the young got advice from the elders.

Twitter and Facebook (and predecessors like MySpace) are flourishing because they recreate that same community network under the village tree, albeit virtually, and bring people together around the same ancient human urge. They are addressing a need hard-coded in our DNA. Not surprisingly perhaps, just this past week we learnt that Twitter as a business is being valued at a whopping $8Bn-$10Bn. This is for a company that had 2010 revenue of only $45M and estimates its 2011 revenue at around $100M. Facebook, with its 600M members, already has a valuation of $50Bn – and it is nowhere near its IPO. Hype or bubble? Could very well be… But meanwhile, the role Twitter played in Egypt back in 2008, and the association both already have with the Egyptian revolution and one of its more famous faces, Wael Ghonim, is nothing short of historic.


Welcome to political engineering with social technology !! Happy Twittering.

Thursday, February 10, 2011

The new Smartphone economy & its "Blue Book"

If PCs became the symbol of the beginning of the information age in the nineties, then phones, particularly smartphones, have certainly come to define today’s networked life. This time, it is also a global phenomenon – enabling school children to play games, amateur traders to buy or sell their stocks, farmers in remote rural areas to get weather news to plan their crops, and on and on.

In India, where I grew up, landlines used to be a relatively premium utility. The past decade has changed that completely as landlines gave way to cell towers. The mobile phone is now the ubiquitous symbol of connectivity across the whole land, available to people of all walks of life – the rich minority and the poor majority. As an Indian friend reminded me recently, for a large swath of the working class in India, the central aspiration used to be to secure basic food (Roti), clothes (Kapda) and shelter (Makaan). Apparently these three aspirations have now been upgraded to add a fourth one – Roti, Kapda, Makaan and Mobile !!

Here in Michigan, where I have lived for the last 19 years, automobiles have defined the life and culture of most of the state. As the local auto industry went through its rough patches, the falling residual values of used cars, as published in the Kelley Blue Book, were closely followed as a barometer of the industry’s health. While reading an article in today’s Wall Street Journal, I was therefore thoroughly amused by a reference to a “Blue Book” describing a new budding market for used smartphones – one more reminder of how smartphones are being entwined into our life and economy.

According to the article, 344,727 old or used iPhones were sold on eBay in 2010. This secondary market continues to thrive as more and more consumers go online to buy or sell through firms like Gazelle and NextWorth – so much so that the Journal produced a “Blue Book” rating (Gazelle.com) of the residual value of some well-known brands: the iPhone 4 retains 60% of its original value, whereas the Droid X holds only 42% and the 4G EVO holds 44%. The BlackBerry Bold 9650, in comparison, holds only 27% !! Apparently consumers can also trade in their devices for store credit at retail stores like Best Buy. Just as in the car industry, the implications for the phone industry could be significant… high resale values could further enhance a brand, as they do for automobiles with above-average trade-in values.
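The arithmetic behind those residual percentages is simple enough to sketch. Note that only the retained-value percentages come from the article; the launch price plugged in below is my own illustrative assumption.

```python
# Toy "Blue Book" lookup built from the retained-value percentages the
# Journal's article quotes (via Gazelle.com).
RESIDUAL_PCT = {
    "iPhone 4": 0.60,
    "Droid X": 0.42,
    "4G EVO": 0.44,
    "BlackBerry Bold 9650": 0.27,
}

def trade_in_value(model, original_price):
    """Estimated resale value = original price x retained fraction."""
    return round(original_price * RESIDUAL_PCT[model], 2)

# Assumed (hypothetical) launch price of $599 for an unsubsidized iPhone 4:
value = trade_in_value("iPhone 4", 599.00)  # -> 359.4
```

The same one-liner explains why resale value matters to the carriers and handset makers: a phone that holds 60% of a $599 price is worth over $130 more at trade-in time than one that holds only 40%.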

Welcome to smartphone economy !!