Submissions Phase II (statements, sketches, future paper abstracts)
2077 – When black is white: Perceptual stress associated with information-affordance syndrome (Dyspercepia) (11/2/2014) -
2077 – Genetic modification alters pigmentation in Ailuropoda melanoleuca to indicate conservation status (11/2/2014) -
2020 – Welcome to the Visual Home: BC Architects, planners and researchers prototype visualization services for informative buildings (10/28/2014) - What happens when data are ubiquitous in our lives, our homes are completely networked, and information pertinent to the decisions we make in daily life is virtually at our fingertips? The approach that hallmarked the first decades of the digital home has been to supply residents with an assortment of standalone apps, websites and social media for retrieving, monitoring and analysing data from a plethora of sources, and to design specialised views that are particular to devices like tablets and phones. But this doesn’t work in an increasingly fluid and dynamic information landscape where information-driven decisions happen throughout the home in …
Submissions Phase I (scenarios)
The design of a visualisation invariably depends upon the task(s) and the target user group it is designed to support. Exploratory and explanatory visualisations generally require different considerations. We consider the future of visualisations from this perspective.
It’s easy to think of information visualisation in terms of consumer products, brands and advertising, but its adoption will reach much further. New industries, schools, even militaries will invariably adopt visualisation systems, for better or worse. What happens when they do?
…Electroencephalogram (EEG) technologies and our ability to technologically sense electric fields evolve significantly beyond where they are now.
Meanwhile, human brain-to-computer communication using body-embedded systems becomes de rigueur, growing extremely sophisticated and compact: we begin to embed these systems in our bodies – our communication, wayfinding, and augmented analysis and processing are encapsulated in ourselves.
Your will is the command for your own Personal Organic Network (PON)…
It matters little where the data, connections and files are, as the interface is your own body, interfaced invisibly with the Network at large – functioning as one: smoothly, invisibly – a question is asked… and answered in imagined visions on the canvas of the mind.
I’m interested in: creative design, finding a needle in a haystack, explaining a complex medical treatment to a worried patient, delivering healthcare, helping different cultures understand each other, helping families stay connected, making scientific discoveries, art. For these, and for the visionaries who work on these tasks, the desktop is already on life support.
Migration from desktop-based visualisations to alternatives beyond the desktop could help overcome some of the problems associated with traditional screen representation. At the same time, it presents a challenge in which many technical and ethical questions have yet to be answered. Some thoughts on how visualisation outside the desktop could be applied are presented through three categories: common objects, large-scale augmented reality and natural devices.
HotDesking in 2031 with technologies that interact to detect, store, relate and display information in the physical environment (10/13/2014)
A busy academic uses emerging technologies to fill his world with visualization, communicating with colleagues and ordering a sushi sandwich in hot-spaced London, 2031.
Modern films such as the Iron Man series, Avengers, and Pacific Rim best exemplify visual interface designs that are futuristic, follow fluid interaction guidelines, and yet are not too distant. These movies show interaction models designed for direct manipulation of real and virtual objects in holographic projections, as well as embodied interaction in completely immersive environments. Furthermore, these imagined interfaces have their own envisioned application domains, ranging from casual computing and information browsing to creative design and even analytics. A common aspect among these many imagined futuristic user interfaces (FUIs) is projection of different types: (1) head-mounted, (2) holographic, and (3) immersive projection. In this paper, we imagine the interaction models that can best fit each of these projector display types when they are adapted to visualization and visual analytics. For this, we consider interaction models that go beyond the desktop to utilize implicit aspects of the environment, such as proxemics, and explicit actions through direct manipulation, gestures, tactile input, and other forms of multisensory feedback. We borrow application scenarios from the aforementioned movies, and the general guideline behind our discussion is that projection type guides the interaction design.
A Dystopian Preview of How Visualization will Adapt to the Split of Society in “Have” and “Have-Not” (10/13/2014)
It is the year 2100. Due to excessive use of monitors, human eyesight has dramatically diminished and the ability to speak has vanished. Humans wear digital glasses to see the world and receive predictive information, and communicate through instant thought messaging captured by neuro-captors. This evolution has plunged society as we know it into a “Have” and “Have-not” divide, in which the rich can choose what they want to see while the poor are submerged and flooded with biased information, with the sole goal of making the rich richer and the poor poorer.
In the society that followed, knowledge demanded such precision that all decisions were based on analysing hundreds of possibilities using thousands of calculations and producing millions of data points. In the course of time, the impracticality of a growing library of data led to the employment of Polyglots, who curated the data, creating encyclopaedias and then charts describing their contents.
Desktops can be replaced with collaborative environments utilizing a combination of large-scale screens for overviews, collaborative analysis and presentation; mobile devices for focused interactions and local exploration; and combinations of devices for layered visual composition.
Information analytics has been democratized. Personalized visualizations are prevalent and surround us… literally. Information auras housing our personal data aid in interactions with others by surfacing current topics of interest – our likes and dislikes. Rather than being tethered to smartphones or other devices, our auras house all of our information and we interact naturally through gesture, mental interaction and tangible computing. Our relevant data is made visible in our aura based on whom we are interacting with. While in groups, our auras fuse based on commonalities and topics of interest in conversation and an intersection of values and passions. Visualizations showing topical convergence, divergence and procedural guidance emerge. It becomes more time efficient to work with others using these highly personalized collaborative aura overlaps than unstructured conversations of the past. Introverted behaviors have become the social norm. Each individual’s private, personal data is “underground” or hidden to protect our information from others.
“Le roi est mort, vive le roi!”, or “The King is dead, long live the King”, was a phrase originally used for the French throne of Charles VII in 1422, upon the death of his father Charles VI. To stave off civil unrest, the governing figures wanted perpetuation of the monarchy. Likewise, while the desktop as we know it is dead (the WIMP interface is becoming obsolete in visualization), it is being superseded by a new type of desktop environment: a multisensory visualization space.
This ‘space’ is still a personal workspace; it is just a new kind of desk environment.
Our vision is that data visualization will become more multisensory, integrating and demanding all our senses (sight, touch, hearing, taste, smell, etc.) to both manipulate and perceive the underlying data and information.
It is the year 2039, the desktop is not dead, and it does not look like this situation will change for a while. In any practical application domain in which data visualization is used, the desktop remains one of the most important tools for data exploration, analysis, and processing. Since the year 2014, non-desktop platforms for data exploration, including large displays, immersive environments, tangible controls, and mobile devices, have found their place in data visualization applications—but they have not and will not replace the desktop for many practically relevant tasks. Instead, researchers have finally begun to work toward an interactive visualization continuum that allows researchers and data analysts to transition between the different platforms and to use each tool for the tasks it supports best: the desktop for in-depth, single-user analysis, and novel platforms for group discussions, mobile data access, and/or good spatial perception.
Our approach to the future of visualisation focuses on experience as a central concept, questioning what is considered information or data, moving to multimodal, multisensory forms of representation, and redefining the designer as an artist with a critical perspective who works with a range of media and materials.
For many, the next few years will see the end of local government in England as we know it. But it won’t be the end of local government; it will, though, deliver its services in a radically different way.
For visualisation the issues are reassuringly familiar, but still unanswered by the discipline: how do you make sense of ‘Big Data’ to enable better decisions across a diverse audience?
Lance felt a buzz on his wrist as Alicia, his wearable, informed him via the bone-conduction earpiece: ‘You have received an email from Dr Jones about the workshop’. His wristwatch displayed an unread-email glyph. Lance tapped it and listened to the voice of Dr Jones talking about the latest experiment. At the same time he scanned through the email attachments, projected in front of his eyes through his contact lenses. One of the files held a dataset of a carbon femtotube structure.
- A short story about the synergy of visualization, wearable and ubiquitous computing, and augmented/mixed reality.
We envision a mixed-reality future where there will be computers everywhere and all around us. We shall experience and regularly use virtual, augmented and hybrid reality systems, exploring information in an amalgamation of physical and computer-generated space. These systems will be integrated across geography and will deliver powerful content seamlessly both at home and at work. Interaction opportunities with such systems are numerous, and new modalities become available every day. In the coming years, we believe interaction with these systems will become far more standardized in both 3D spatial and 2D media. The interaction designs will borrow significantly from our daily natural interaction metaphors, supported by proven techniques from the human-computer interaction community. Multi-modal and multi-party visualization will be made possible by the availability of commodity-level display and interaction devices, supported by strong network connectivity capable of delivering vast amounts of data in real time. This will result in transformative progress in the sciences and will significantly improve the quality of our lives.
We explore shared-memory workstations as compelling alternatives to desktops and small clusters for the purposes of scientific visualization. With new manycore CPU hardware on the horizon and the current popularity of large-memory “fat nodes” in HPC, SMP workstations are poised to make a comeback. These machines will augment, not replace, HPC and cloud resources, providing both remote visualization and more personalized vis labs. They will be accessible anytime, anywhere, on any device, running a single operating system, capable of handling all but the absolute largest scientific data. We describe the current state of the art, emerging trends, and use cases that could make the SMP workstation the dominant driver of high-end scientific visualization in the next decade.
Recent research in Visualization has focused mostly on data analysis systems for domain experts, but has also considered presentation to external people in the form of storytelling. The established directions assume that the target audience has an inherent interest in the facts to be discovered, sometimes even to the point of being willing to learn how to operate a complex visualization system and to spend considerable time and effort. In reality, sometimes the opposite is true: people unwilling to face an inconvenient truth actively avert their eyes. As a solution, we propose the presentation of facts by experts who manage to gain a limited amount of attention by means of rapid and expressive visualization. Using conventional desktop systems, this method is hard to implement, but new visual channels will open up new possibilities.
While we can look to Hollywood for inspiration about the future of visualization and interaction with data, we must be careful to recognize some fundamental differences between movies and reality. We explore three areas – complexity, magic, and augmented reality – and examine their uses both within movies and their potential uses in post-desktop visualizations.
At barely 1.5 centimeters across, each Cetonia scarab is a marvel of precision engineering. Designed from the ground up for agile flight, their integrated hydrogen chambers and a high-efficiency hover mode permit 15+ minutes of air time between charges. The hueSHIFT carapace is capable of displaying over 22 million possible colors and provides clear visual feedback in day or night with visibilities up to 1.5 kilometers. Integrated camera and sensor arrays permit full 6D reconstructions with composition profiling. From your wrist or a personal field station you can quickly deploy flights in automated formations to survey, measure, record, and manipulate almost anything.
Flash to order now.
This practice-led design research explores the deployment and use of a physical, non-digital visualisation tool to model personal social networks. The emphasis is on how people choose to represent their networks, what they choose to show, and how the process of creating physical representations contributes to the uncovering of an otherwise invisible set of relations. Research focus is on the construction of narrative meaning in a social context by a mixed sample of participants, and the development of instruments to support and mediate this construction. The research is intended to shed light on how people construct personally meaningful narratives about their social networks by creating physical visualisations of them. Experiencing personal networks physically by constructing them from everyday materials brings them into clear sight; to the forefront of haptic and phenomenological consciousness in ways difficult to emulate with computer monitors and touch screens.
We envision the following grand challenge: to develop a technology that enables users to visualize a spherical and volumetric environment without using traditional display devices as a medium. This technology will of course be realized step by step, for example: (i) first enabling direct stimulation of any part of the pathway between the optical nerves and the visual cortex, bypassing the eye; (ii) next facilitating perceptual formulation or cognitive reconstruction of a single flat image; (iii) then spherical vision; and (iv) finally volumetric vision.