I’ve guessed right
This is an overview of the technological and social forecasts* made by the author of this blog that have come true in whole or in part. It can serve as an introduction to the blog and as a way to assess its creative-futurological track record.
I suppose that in the next 10-20 years important transformations will take place in the structure of the world economy. They will be driven by the advent of online services that can integrate remote entrepreneurs, their partners, customers and remote workers across the entire supply chain of business processes, and provide information support for managing that chain. I have returned to this topic several times: in June 2008, in August 2009, in February 2010 (and, at various times, on paper).
In brief, the services described will allow a "telebusinessman" to post the idea of a new product, gauge potential demand, assemble a team of engineers, designers, marketers and so on, build a prototype, collect pre-orders, negotiate with manufacturers, carriers and trade networks, and split the revenue automatically (and even, perhaps, pay taxes in accordance with national legislation). All these actions will be available in a single user interface. This scheme will allow businessmen to minimize costs and risks, eliminate the need to rely on traditional corporate structures that are fast losing their dynamism and "growing fat", and reduce the dependence of their business projects on bank lending, since the money to launch a project (at least part of the required amount) will be gathered from buyers, and some settlements in the system can be handled by automatically built barter chains.
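The chain of steps described above can be pictured as a simple pipeline with an automatic revenue split at the end. The sketch below is purely illustrative: the stage names and the revenue-split function are my own invention, not the API of any real service.

```python
# A minimal sketch of the remote-business pipeline described above.
# All stage names are hypothetical illustrations, not a real service's API.

STAGES = [
    "idea_posted",
    "demand_estimated",
    "team_assembled",
    "prototype_built",
    "preorders_collected",
    "production_negotiated",
    "revenue_split",
]

def advance(stage: str) -> str:
    """Move a project to the next stage of the pipeline."""
    i = STAGES.index(stage)
    if i == len(STAGES) - 1:
        return stage  # pipeline complete
    return STAGES[i + 1]

def split_revenue(total: float, shares: dict) -> dict:
    """Split revenue automatically among participants by their agreed shares."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return {name: round(total * share, 2) for name, share in shares.items()}

payout = split_revenue(1000.0, {"inventor": 0.3, "engineers": 0.5, "marketers": 0.2})
```

The point of the sketch is that once every stage lives in one system, the final settlement can be a single automatic step rather than a chain of invoices.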
As a result of the introduction of such services, a three-tier model of international business may emerge. At the top level there will be a limited group of global companies that earn money by creating and promoting innovative products (by and large, by developing and selling ways of life). These companies will create a kind of "technology construction kit". Companies at the second level will license individual "blocks" of the kit and create original products with new consumer functions, designed for different groups of consumers and for personalized sales. Companies at the third level will produce utilitarian consumer goods, copying the most successful products of the second-level companies.
Internet services that take on certain business processes of a remote enterprise are steadily entering the market. Thus, in November 2008 the group-discount service Groupon.com (which, in effect, allows a company to collect user pre-orders) was founded. In 2009 Kickstarter.com was created, which helps authors of creative projects raise money for their implementation through crowdfunding. Also in 2009 the startup Quirky.com was founded, creating a social product-development environment: an inventor proposes a product idea, and the service organizes its discussion, the creation and sale of the product, and pays the author of the idea a percentage. There are also many services on the market that help assemble effective teams for various projects. Sooner or later (according to my forecasts, before the end of this decade), all these business models will be combined into one general service, a single information environment for efficient remote business.
The principle of the continuity of life
Communication between modern man and his computer devices will be much more comfortable if, at the push of a button, he can switch to a new device (be it a smartphone, PC, notebook, media center, video wall in the house or office, etc.) and continue a conversation with a remote interlocutor (a video game, watching a movie, listening to music, etc.) at exactly the point where he left off on the previous device. In May 2008 I proposed the principle of the continuity of life, which would reduce to zero the switching time between digital devices performing one and the same task.
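The mechanics behind such continuity can be sketched very simply: a cloud-side session record stores what the user is doing and where he stopped, and any new device resumes from that record. The class and field names below are hypothetical, a minimal illustration rather than any vendor's real API.

```python
# A minimal sketch of "continuity of life": a cloud-side session record lets
# any device resume an activity exactly where the previous device left off.
# The class and field names are hypothetical, not any vendor's real API.

import time

class SessionStore:
    """Toy in-memory stand-in for a cloud session service."""
    def __init__(self):
        self._sessions = {}

    def save(self, user: str, activity: str, position: float):
        # The previous device reports what the user is doing and where he stopped.
        self._sessions[user] = {
            "activity": activity,
            "position": position,   # e.g. seconds into a movie or a call
            "updated": time.time(),
        }

    def resume(self, user: str):
        # The new device fetches the state and continues from the same place.
        return self._sessions.get(user)

store = SessionStore()
store.save("alice", "movie:casablanca", position=2710.5)  # paused on the TV
state = store.resume("alice")                             # picked up on a tablet
```

Everything else (codecs, buffering, device discovery) is engineering detail; the essential requirement is only that the session state lives outside any single device.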
The first step in this direction was made by ASUS, which presented, in June 2009, its concept of a continuous digital life, whose fourth point was formulated as follows: "…a continuous flow of data can synchronize PDAs, mobile phones, personal computers and laptops. Thus, the user can work with a file, and the edited version of the file will appear on all of his devices as a result of synchronization." Another step in the same direction was Microsoft's "three screens and a cloud" concept, announced in November 2009, which assumes that all "cloud" services must be equally usable from computers, phones and TVs. In October 2011 the application Intel Pair & Share appeared, which allowed photos to be transferred to a laptop, PC or TV screen with one touch on the screen of a smartphone or tablet; several people at once can show photos to each other in this way. Google's study "The New Multi-screen World", published in August 2012, will certainly have serious consequences in this context: it revealed two main usage modes, a) the user moving from device to device to complete one goal, and b) the simultaneous use of multiple devices. Finally, there are rumors that one of the features of the future Apple iPanel TV may be the ability to start watching a movie on the TV and, at any moment, continue it from the same place on a tablet.
Life without transport “pauses” + “video portals”
In May 2008 I supposed that in the 21st century the main indicator of the development of transport systems would not be increased speed of motion, but an increased ability to work, study, entertain oneself and communicate remotely during the trip. This is another manifestation of the principle of the continuity of life discussed above.
In the same post, the operation of a "video wall" was described, through which passengers would communicate with friends and colleagues who at that moment were at home, in the office, at a shopping center, at a disco, etc. High-definition video, shown in both directions on human-height screens, would make the interlocutors forget that they are separated; the illusion of physical presence in a remote office, or of a guest at the party, would be created, and a passenger would even be able to use gestures to control appliances in his own apartment!
It seems that the idea of maximally productive use of travel time may first be implemented by air carriers. In June 2011 Airbus introduced its concept of an aircraft interior for 2050, the Concept Cabin. The cabin provides areas and facilities for relaxation, massage, aromatherapy, immersive virtual reality, holographic gaming, communication, work, business meetings, etc. The Concept Cabin authors introduce a new term, "seamless journey", implying that passengers should be able to continue, during the journey, everything they would be doing on the ground. This is entirely consistent with the idea of the continuity of life outlined in the blog "The ideas of the future".
As for the "video walls", they will most likely grow out of modern telepresence systems, which are increasingly used in corporate meeting rooms. In future telepresence systems, screen size and picture quality will increase so much that they can provide the effect of a "seamless" connection of two spaces (for example, two rooms separated by hundreds of kilometers). I would venture to guess that the first video walls of such quality will be called "video portals", for the psychological effect of "teleportation" (of course, this is only an effect: you cannot physically stay in the new place or share things, handshakes or hugs with a remote interlocutor 🙂 ). Perhaps Magic Window, launched by Microsoft in 2011 (see video), is about just this. As the head of the project, Stephen Batish, says, its purpose is to ensure that people separated by a considerable distance can interact as if they were on opposite sides of a window. And this is pretty much what one wants.
In March 2009, I predicted the emergence of a technology that would allow one to feel direct contact when touching virtual 3D models hanging in the air. I should add that a few days earlier I had assumed that virtual three-dimensional copies of furniture, furnishings, sculptures, etc. would decorate the home of the future. This would reduce the production of many things the consumer can do without, and thereby reduce the load on the planet's ecology.
Already in July 2009, a YouTube presentation appeared of Touchable Holography, a technology developed by scientists from the University of Tokyo that allows one to touch a hologram and interact with it without intermediary devices: touching a virtual surface with the hand, rotating it, etc. I should add that in September of the same year, Reuters published the opinion of the leader of the Touchable Holography development team, Hiroyuki Shinoda, that "this technology can be used to replace physical objects, making it economical and environmentally friendly." In 2011, the similar projects RePro 3D and DisplAir appeared.
In May 2008, I ventured to guess that in the future people would be able to find common ground and tune in to the same emotional wavelength thanks to the emergence of gadgets that can "signal" the emotional state of their owners.
In March 2010, University of Tokyo experts presented a prototype of an "emotional" phone, whose handset turned colder or warmer depending on what emotions the remote interlocutor was experiencing. And in April 2010, designer Vanessa Sorenson introduced a concept based on LED elements mounted on clothing, which change color depending on the emotional coloring of the messages in the user's Twitter account.
Orientation indoors
In early February 2011, I predicted the emergence of a technology that would make it easy to find one's way inside large business centers, shopping malls, concert halls, sports arenas, etc. On entering the building, you receive an offer to download a three-dimensional scheme of it, with which you can easily find any office, shop, bank, café, call center, conference room, toilet, etc. on the premises.
In May 2011, the Google Street View project announced a new service for owners of cafés, restaurants, shops, hotels, shopping and business centers, railway stations, airports, museums, etc., and already in October the results of panoramic surveys of many public buildings in several cities of the world were integrated into Google Maps. In September 2011, Locata Corporation introduced positioning systems accurate to within 1 inch for industrial, warehouse and other spaces where a GPS signal is not available. In March 2012, details appeared about a new chip for smartphones and tablets, the Broadcom BCM4752. Using data from four navigation systems at once (GPS, GLONASS, QZSS and SBAS), it is able to calculate the user's location to within a few centimeters, even indoors (including determining the floor of the building). In April 2012, Microsoft introduced the first prototype of its SemanticMap navigation system, capable, in particular, of suggesting the way through an office labyrinth.
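The geometry behind such indoor positioning systems can be shown with classic trilateration: given measured distances to three fixed beacons, a 2D position follows from a small linear system. The beacon coordinates below are invented for illustration; real systems like Locata's add many more measurements and error handling.

```python
# A hedged sketch of the geometry behind indoor positioning: with known
# distances to three fixed beacons, a 2D position follows from trilateration.
# Beacon coordinates here are invented for illustration.

import math

def trilaterate(beacons, distances):
    """Solve for (x, y) from three beacon positions and measured ranges."""
    (x0, y0), (x1, y1), (x2, y2) = beacons
    d0, d1, d2 = distances
    # Subtracting the first range equation from the other two linearizes
    # the problem into a 2x2 linear system.
    a1, b1 = 2 * (x1 - x0), 2 * (y1 - y0)
    c1 = d0**2 - d1**2 + x1**2 - x0**2 + y1**2 - y0**2
    a2, b2 = 2 * (x2 - x0), 2 * (y2 - y0)
    c2 = d0**2 - d2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a1 * b2 - a2 * b1
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
target = (3.0, 4.0)
ranges = [math.dist(b, target) for b in beacons]
print(trilaterate(beacons, ranges))  # recovers approximately (3.0, 4.0)
```

Centimeter accuracy, as claimed for the BCM4752, comes not from the math but from the quality and number of the range measurements.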
The wall that unites
In July 2009, I proposed placing citizens' graffiti and wishes, on the theme "I want my city to…", on an ownerless wall during city holidays. Such walls (plus their virtual analogues on the Internet, as well as published books with citizens' suggestions) could become calling cards of different cities.
A similar art project, "Before I Die", started in New Orleans in 2011 and has spread to more than a dozen countries. Passersby are invited to pick up a crayon and continue the phrase "Before I die, I want…" In 2013 it is planned to publish a book summarizing the results of the project.
Nonlinear text editing
In March 2010, I proposed the concept of nonlinear text editing, along with a version of its practical application in an e-book reader.
In June 2011 there appeared the first software, in my experience, supporting the ideology of nonlinear editing: LiquidText, a program for tablets. In particular, LiquidText allows one, with a light touch on the screen, to select, copy, and combine or separate fragments and comments relating to one or more texts, as well as to switch easily from viewing a text as a whole to viewing only the selected fragments.
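The core of such a tool is a data model in which excerpts keep a link back to their source span and can be viewed apart from the full text. The sketch below is my own illustration in that spirit; the class and method names are invented and have nothing to do with LiquidText's actual internals.

```python
# A minimal sketch of a data model for nonlinear editing: excerpts keep a
# link back to their source span and can be viewed apart from the full text.
# All names here are my own illustration, not LiquidText's internals.

from dataclasses import dataclass, field

@dataclass
class Fragment:
    doc: str        # source document identifier
    start: int      # character offsets of the excerpt in the source
    end: int
    comment: str = ""

@dataclass
class Workspace:
    sources: dict = field(default_factory=dict)   # doc id -> full text
    fragments: list = field(default_factory=list)

    def excerpt(self, doc: str, start: int, end: int, comment: str = "") -> Fragment:
        frag = Fragment(doc, start, end, comment)
        self.fragments.append(frag)
        return frag

    def text_of(self, frag: Fragment) -> str:
        return self.sources[frag.doc][frag.start:frag.end]

    def fragments_view(self) -> list:
        """View only the selected fragments instead of the whole text."""
        return [self.text_of(f) for f in self.fragments]

ws = Workspace(sources={"a": "Nonlinear editing frees text from page order."})
ws.excerpt("a", 0, 17, comment="key idea")
print(ws.fragments_view())  # ['Nonlinear editing']
```

Because a fragment stores offsets rather than copied text, combining, separating and commenting on excerpts never disturbs the source document.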
“Video archive of the Life” + “E-alibi”
In May 2008, I assumed that in the near future each of us would be able to automatically collect the most complete video archive of his life and discuss pieces of it with users of social networks in order to work through life and psychological problems (earlier, this idea was published in my LiveJournal). At the same time, I pointed out that the active use of such devices could frighten off robbers or provide an alibi for their owners in the event of unfair accusations (a year later, in another post, I suggested the term "electronic alibi" for such cases).
In the autumn of 2008, the concept of an unusual gadget, the Experience Recorder, was announced: "gloves of sensations" that allowed one, with simple gestures (fingers outstretched or folded into a "telescope"), to broadcast photos, videos, the sounds of the surrounding world and one's temperature straight to one's blog.
In the spring of 2011, the release of Eyez glasses was announced in the United States, allowing owners to broadcast the video-audio stream of their life in real time, store it in the "cloud", and later show it in blogs and social networks. The manufacturer promised that this gadget would revolutionize the network. That has not yet happened. But there is reason to believe Google can do it: such services will surely be created for users of the augmented-reality glasses of Google's Project Glass, whose sales are to start in early 2013.
As for the police, in 2010 the U.S. police began using head-mounted cameras to record incidents while on patrol. And in 2011 the staff of Russian penal colonies were equipped with personal video recorders that record their actions during the entire shift. For now, this comes from the government. However, there is no doubt that citizens will be no less active in using a variety of wearable devices that record video and transfer it to the "cloud" to protect their rights, as soon as such gadgets go on sale. And the practice of the "electronic alibi" will spread, de facto and de jure.
In February 2009, I described the idea of an e-graffiti technology. It would enable individual houses, offices, streets, and even whole cities to change their "skins" in a few hours (or, if necessary, instantly). For citizens, e-graffiti technology would be a way to exchange cultural signs with each other and to engage in endless performances of decorating the city. E-graffiti is a "non-destructive" way to draw on the walls of houses, porches and rooms, erasing irrelevant or inartistic graffiti at any time. The essence of the technology is a special transparent coating for walls, containing magnetized colored balls in its pores. In those places where you pass a special magnetic "wand" along the wall, the orientation of the balls changes; the color of those fragments of the wall changes accordingly, and pictures appear that you can always erase with the same wand.
Artists and technologists periodically offer ways to adapt big screens to electronic graffiti. For example, in July 2009 a computer system called YrWall was presented in the UK: one can draw with infrared "paints" on a screen measuring 1.7 × 3.1 m. And in August the French conceptualist Antonin Fourneau introduced an LED screen on which you can draw with moisture, for example with a wet brush or palm. These are wonderful social and cultural inventions, which have only one defect: they are tied to a place and time, to a certain room, cultural event, party, etc. This is not a "canvas" on which any passerby can leave graffiti at any time. So I would like to draw the attention of people in the arts and technology to the fact that in March 2011 a "chameleon" material that changes color under magnets was introduced. This is exactly what will allow the idea described in this blog to be realized: to involve many townspeople in the cultural life of their cities.
Play-panorama of sound sources
In May 2008, I proposed the idea of a revolutionary interface for controlling MP3 players and other audio devices. When the user wants to select a new song (audiobook, FM radio station, etc.), several tunes (audiobooks, etc.) begin to sound in his headphones at the same time, from different parts of a virtual 3D audio space. To choose a track to listen to, the user simply turns his head toward the corresponding virtual sound source and, for example, nods or performs some other action. I even ventured to suppose that such a gadget would be a hit of the 2010 Christmas sales!
Of course, hits are determined by buyers. However, the technical ability to create such an interface first appeared in 2009! First, in May 2009, the largest Japanese mobile operator, NTT DoCoMo, announced the development of a technology for mobile phones that allows virtual sound sources to be placed in space. For example, a mobile phone user can assign each participant of a conference call his own sound point, so as to navigate the 3D audio space just like a real meeting table. And in October 2009 the same NTT DoCoMo introduced headphones that can translate eye movements into commands to control a player. It would seem obvious to cross the two technologies, but apparently their developers worked in different departments. Meanwhile, I am sure that sooner or later the virtual "play-panorama" will displace the standard playlist in audio devices.
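The selection logic of such a "play-panorama" is simple to sketch: several virtual sources are placed around the listener, and the source nearest to the current head yaw is selected. The angles and track names below are invented; a real implementation would add spatial audio rendering and gesture confirmation.

```python
# A sketch of how a "play-panorama" might pick a track: several virtual
# sources surround the listener, and the one nearest to the current head
# yaw wins. Angles and track names are invented for illustration.

def pick_source(head_yaw_deg: float, sources: dict) -> str:
    """Return the track whose virtual direction is closest to the head yaw."""
    def angular_gap(a, b):
        # Smallest absolute difference between two angles, in degrees.
        return abs((a - b + 180) % 360 - 180)
    return min(sources, key=lambda name: angular_gap(head_yaw_deg, sources[name]))

# Four previews sounding simultaneously from different directions (degrees).
sources = {"song": 0.0, "audiobook": 90.0, "fm_radio": 180.0, "podcast": 270.0}

print(pick_source(85.0, sources))  # listener turned right: "audiobook"
```

Note the wrap-around in `angular_gap`: a listener facing 350° is only 10° from the source at 0°, not 350°, so the comparison must be done on the circle.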
In March 2009, I described the potential for creative applications of augmented-reality technologies, where the user could transform (for himself) the appearance of the world: interiors, architectural landscapes, weather and time of day, the appearance of people, etc.
In December 2009, Magic Vision Lab experts developed the first augmented-reality system that allows weather effects (rain, snow, hail, etc.) to be "imposed" on the real world.
In May 2010, I proposed a way to kill two mobile birds with one stone: a) to save subscribers from the promotional SMS mailings of mobile-service sponsors, and b) to ensure the active promotion of those same sponsors. To opt out of the mailings, the subscriber must periodically type out, by hand, the list of sponsoring companies and send it to a free short number.
A similar principle is used by SolveMedia, which announced itself in September 2010. It has developed a new advertising algorithm, Type-In, which requires users, in order to access a site, to manually retype the banner's slogan. According to a test involving advertisers such as Microsoft and Toyota, the new algorithm improves ad recall from 3% (when merely viewing the banner) to 40% (with Type-In).
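The mechanism is essentially a captcha whose challenge text is the ad itself: the user must retype the slogan before proceeding, which both verifies a human and reinforces the message. The sketch below is my own toy version of the idea; SolveMedia's actual implementation is certainly different.

```python
# A hedged sketch of the Type-In idea: the user must retype the banner's
# slogan before proceeding. This is my own toy illustration, not
# SolveMedia's actual algorithm.

import re

def normalize(text: str) -> str:
    # Ignore case and punctuation when comparing, so minor typing
    # differences do not block the user.
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

def verify_type_in(slogan: str, typed: str) -> bool:
    """Accept the attempt only if the typed text matches the slogan."""
    return normalize(typed) == normalize(slogan)

print(verify_type_in("Let's go places", "lets go places"))  # True
```

The design choice worth noting is the lenient normalization: the goal is recall of the message, not exact transcription, so apostrophes and capitalization should not fail the check.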
Google, if you please
In September 2010, I made proposals for improving the functionality of Google services. In particular, I discussed the further integration of some services, such as Google Docs (now Google Drive), Gmail, Picasa and others, as well as simplifying file management in Google Docs to "one touch".
Of course, Google went its own way. 🙂 But I am glad that this road basically coincides with the wishes of users, and in particular of your humble servant. 🙂 Some time after the publication of that post, it became possible to embed photos from Picasa albums in documents; a kind of file explorer appeared in Google Docs, making it easier to manipulate files and entire folders; then, in Google Drive, a separate application appeared for managing files, embedded in Windows Explorer and in smartphone operating systems; the possibility of holding video meetings (in Google+) was introduced, along with some other functions that improve efficiency in the Google environment. We look forward to more!
“Faceted pixel” + “Multivision” + “video architecture”
In August 2008, I predicted the appearance of screens consisting of special elements (I called them "faceted pixels") that can show completely different pictures in different directions.
I suggested two areas for using such screens. The first (in conjunction with eye-tracking technology) is an interactive "Multivision" working on the principle of "one window, many broadcasts". Examples: precisely targeted outdoor advertising, individual "tuning" of a film's plot for each viewer in a cinema, the selection of different training exercises for different students in a classroom, etc.
The second direction is the use of "faceted" screens in the architecture of the distant future. Suppose that the output of all the screens covering an architectural object is synchronized, and that this single hyper-screen is able to "broadcast" a separate image to each point of the surrounding space (including background images: the sky and the urban landscape hidden from the observer by the object itself). This would allow striking effects to be achieved. All external observers could be shown a synchronous transformation of the external appearance of a building or a whole architectural complex: dynamic changes of its shape, color, texture and style elements. For example, a standard box-shaped skyscraper might, before one's eyes, "become" a cylinder, a sphere, a medieval castle, the Roman Colosseum, etc., or even "disappear". If anything like this is ever played out in the future, we will be able to speak of a new art form: video architecture (the dynamic management of the perception of architectural objects).
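The "faceted pixel" itself can be modeled as an element holding one sub-pixel per viewing sector, so that observers in different directions see different colors. The sector layout below is my own simplification of the idea, invented purely for illustration.

```python
# A toy model of a "faceted pixel": each element stores several sub-pixels,
# one per viewing sector, so observers in different directions see different
# images. The sector layout is my own simplification of the idea above.

class FacetedPixel:
    def __init__(self, n_sectors: int):
        self.n = n_sectors
        self.colors = [(0, 0, 0)] * n_sectors  # one RGB color per sector

    def set_for_direction(self, azimuth_deg: float, color):
        self.colors[self._sector(azimuth_deg)] = color

    def seen_from(self, azimuth_deg: float):
        """The color an observer at this azimuth actually perceives."""
        return self.colors[self._sector(azimuth_deg)]

    def _sector(self, azimuth_deg: float) -> int:
        return int((azimuth_deg % 360) / (360 / self.n))

px = FacetedPixel(n_sectors=4)
px.set_for_direction(10.0, (255, 0, 0))   # observers to the "north" see red
px.set_for_direction(100.0, (0, 0, 255))  # observers to the "east" see blue
print(px.seen_from(45.0), px.seen_from(135.0))
```

A whole "Multivision" screen or hyper-screen façade is then an array of such elements, with a renderer assigning each sector the frame intended for the observers in that direction.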
As for "Multivision", so far we see a set of separate steps that could lead to this technology. Thus, in December 2008 the world's first billboard that monitors pedestrians was installed in Tokyo (though everyone saw the same picture, the system counted those who, in principle, paid attention to the advertising). In February 2012, at one of the bus stops in London, a billboard equipped with face detection showed different variants of a social advertisement to men and women. Screens are also improving. In October 2010, Toshiba introduced the world's first commercial 3D TV that can be viewed without special 3D glasses, thanks to the formation of two different images for the viewer's right and left eyes. In February 2011, Sony demonstrated a technology for playing two different games on one monitor, or watching two different TV channels. Specialists at the Massachusetts Institute of Technology went even further: by March 2011 they had achieved preservation of the stereo effect regardless of the viewer's position relative to the screen.
As for "video architecture", its appearance will clearly become realistic only after the transition to energy that is, from today's point of view, essentially free. However, in September 2011 at an exhibition in London, the Adaptiv technology was demonstrated, which can make fairly large objects, such as a tank, "invisible". So far the invisibility works only in the infrared, but the engineers are working on masking in other regions of the electromagnetic spectrum. I am glad, however, that the Adaptiv project uses roughly the same principle discussed above: the system analyzes the surrounding background hidden by the object from the observer, and simulates this background with so-called hexagonal pixels.
* For reference: since February 2008, the author of this blog has practiced "freezing" the date and content of his predictions in immutable form with the service Webcitation.org.