Although it would be prudent to define Web 3.0 in concise terms that leave no room for uncertainty, that is not yet possible, precisely because Web 3.0 is not a definite product or service, nor even a standard with structured guidelines.

Essentially, though, Web 3.0 is the next step in the evolution of the World Wide Web: from a mere repository of information on interconnected networks to a web whose vast contents make sense to the software agents that increasingly access it.

Comparison with previous versions:

The World Wide Web, when first launched, was just an interface to access data stored on standalone terminals or servers. Web 2.0 (a term whose validity is often debated by the industry faithful) rose like a phoenix out of the dot-com bust and was purported to be the rebirth of the Internet. In reality, however, it only built upon the established underlying principles of the World Wide Web (e.g., HTML as a base, with AJAX layered over it).

Even so, Web 2.0’s contribution to the World Wide Web is a slew of services aimed at facilitating collaboration and sharing among users. Most notable in that direction has been the advent of social networking sites, blogs, audio/video posts, podcasts, wikis, instant messaging and the like.

The Web 2.0 era has also seen the rise of powerful search engines that can rip into the guts of a page and extract relevant data. There is a catch, though: even the most powerful search tool needs the brains and thought process of a human to guide it to the right page, or to feed it a generous dose of keywords, before it can come up with the intended results.

Web 3.0, on the other hand, aims to transfer that thought process directly into the way a search engine or software agent operates. It aims at a World Wide Web where all data is as easily understandable to machines as it presently is to humans, thus ushering in the age of intelligent computing and, by extension, semantic computing.
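To make “understandable to machines” concrete: in the Semantic Web vision, a statement a human would read as prose is also published as explicit subject-predicate-object triples. The sketch below is purely illustrative, using Python with the rdflib library and a made-up example.org vocabulary; Web 3.0 prescribes no particular toolkit.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, RDF

# Hypothetical vocabulary, invented for this illustration.
EX = Namespace("http://example.org/")

g = Graph()
alice = EX.alice

# The sentence "Alice is a person who lives in Auckland and knows Bob"
# becomes three explicit statements that software can reason over.
g.add((alice, RDF.type, FOAF.Person))
g.add((alice, EX.livesIn, Literal("Auckland")))
g.add((alice, FOAF.knows, EX.bob))

# Turtle is one common machine-readable notation for such triples.
print(g.serialize(format="turtle"))
```

Because each triple is unambiguous, software agents can combine statements like these from many different sites without first having a human interpret the prose.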

Benefits:

1) With semantic computing as its soul and guiding light, Web 3.0 will open up the astronomical amount of data on the web to intelligent analysis.

E.g.: Let’s suppose that Eric, 22, wants to touch base with an old friend of his, Tracy. The problem is, they were both 8 years old when Tracy left their hometown of Auckland for Geneva. Having lost contact with her after that, Eric is not even sure whether she is still there or has moved to another country. All he now knows is her age, her last name, her mother’s name, the name of the school they both attended and the year she left for Geneva.

A contemporary search engine employed to look up Tracy would probably draw a blank if her present information is not explicitly published anywhere, not because it lacks data to search with, but because it does not have the inherent capability to put all of that data to intelligent use. With Web 3.0, however, a search tool would correlate all the data it digs up from school records, family names, immigration records and national/international travel logs, analyze and sift through the promising leads, and hit upon the one trail that leads to Tracy through a maze of seemingly unrelated web content (a rough sketch of such a correlated query follows this list).

2) The second benefit, derived from the first, would be the possibility of delegating the work of searching, collating and analyzing data to computers themselves. This would leave humans free to focus on the big picture, while the data and its logistics are silently handled by machines in constant interaction with one another.

3) It would also enable machines operating on and from different databases and platforms to exchange information successfully with one another, primarily because of the underlying artificial intelligence that Web 3.0 makes possible.

4) Automation, as we know it today, is really a series of pre-planned instructions that machines are programmed to follow. With semantic computing, however, computers will actually be able to make most simple decisions for themselves, and even complex ones where possible, making routine human intervention redundant.
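The Tracy scenario in benefit 1 can be sketched as a query over linked data. The fragment below is an assumption-laden illustration: it pretends that school and immigration records were already published as RDF with a shared, made-up example.org vocabulary (the surname Smith is likewise invented), and it uses Python with rdflib and SPARQL simply as stand-ins for whatever agent a Web 3.0 search tool might actually employ.

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")

# Two independent, hypothetical data sources describing the same person.
school = Graph()
immigration = Graph()

tracy = EX.tracy
school.add((tracy, EX.attendedSchool, Literal("Auckland Primary")))
school.add((tracy, EX.lastName, Literal("Smith")))
immigration.add((tracy, EX.movedTo, Literal("Geneva")))

# Merge the sources; shared identifiers and vocabulary let the triples
# from both graphs line up without any human interpretation.
combined = Graph()
for source in (school, immigration):
    for triple in source:
        combined.add(triple)

# One query correlates facts that no single source states together.
results = combined.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?person ?city WHERE {
        ?person ex:attendedSchool "Auckland Primary" .
        ?person ex:lastName "Smith" .
        ?person ex:movedTo ?city .
    }
""")
for person, city in results:
    print(person, "appears to be in", city)
```

The point is not the particular library but the principle: once data from unrelated sites is expressed in shared, machine-readable terms, correlating it becomes a query rather than a human research project.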

Limitations:

Industry watchers, however, are skeptical. Their prime areas of skepticism are the following:

1) It is argued that it would be time-consuming for content to be published in two formats: one for humans and another for machines. Unless a method is devised to automatically generate machine-friendly data formats, this concern is valid and critical (one possible approach is sketched after this list).

2) Invasion of privacy and censorship: with an artificially intelligent Web, the creation or modification of data could easily be traced back to its originator (e.g., the tracing of bloggers and webmasters). This could potentially violate individual privacy and may even lead to forced censorship.

3) Although Web 3.0 sounds great and one would expect it to go mainstream soon, it is doubtful, if not entirely impractical, to expect truly intelligent behavior from machines, particularly given the whims and fancies of human expectations.
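As a small illustration of the first concern, here is one way a machine-friendly format could be generated automatically from the same record that produces the human-readable page, so nothing has to be authored twice. This is only a sketch: the record, the URL and the use of Python with rdflib and Dublin Core terms are assumptions, not a prescribed Web 3.0 mechanism.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC

# A single, hypothetical content record maintained by the author.
post = {
    "url": "http://example.org/posts/web30",
    "title": "Web 3.0 and the Semantic Web",
    "creator": "Jane Doe",
    "date": "2008-06-15",
}

# Human-facing HTML, generated from the record.
html = (
    "<div><h1>{title}</h1>"
    "<p>By {creator} on {date}</p></div>"
).format(**post)

# Machine-facing RDF (Dublin Core terms), generated from the very same record.
g = Graph()
subject = URIRef(post["url"])
g.add((subject, DC.title, Literal(post["title"])))
g.add((subject, DC.creator, Literal(post["creator"])))
g.add((subject, DC.date, Literal(post["date"])))

print(html)
print(g.serialize(format="turtle"))
```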

Conclusion:

With Web 3.0 a real possibility in the evolution of the World Wide Web, one can look forward to a new array of web services characterized, essentially, by a degree of artificial intelligence. The fact that the mammoth data archive of the web would be open to analysis across various platforms would make online services much more resourceful. What does that mean for the common netizen? Less exercising of one’s own intellect in data mining, collation and decision making; in other words, a much faster, more intuitive and more productive web experience.