How Do Your Favorite Books Compare in a VR World?

John Martin, UX Developer, January 23, 2019

Finding hidden relationships within semantic spatial embeddings

Exploring Content (the traditional way)

If you could search your own library of books digitally, how would you do it? 

Would you do a simple text search for certain keywords or phrases? 

Simple text searches can be powerful but typically only answer the question "where are these terms mentioned in my books?" 

There are, however, some questions that would be hard for a simple text search to answer:

  1. How would I find content where phrasing does not match typical keyword searches?
  2. How do all my books relate to each other?
  3. Which of the books that I have not read would be a good place to start?

These questions are trying to access the "semantics" or the "meaning" within your content regardless of the actual words used.

Beyond the Simple Text Search

We decided to create a proof of concept (POC) for clients who want to understand how questions about the semantics of their content can be answered. Additionally, since Virtual Reality is in high demand in classrooms right now, we decided to use VR to further illuminate the power of exploring semantic information.

I was in charge of gathering the materials for this POC, and so I chose 14 books that I was already familiar with. Being familiar with the content would help me validate that the machine learning and layout algorithms were on the right track.

The books were all non-fiction and generally covered science / math topics with the exception of one biography (which you will see stands out like a sore thumb in the visualization).

Extracting Semantic Information

Extracting semantic information from text is now a common practice in Natural Language Processing (NLP). In most cases, a Machine Learning algorithm runs through collections of documents and assigns each document a semantic value (vector). Each portion of text can then be semantically compared to any other by comparing their semantic vectors mathematically.
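
As a concrete illustration, one common mathematical comparison is cosine similarity between two semantic vectors. The toy 4-dimensional vectors below are made up for illustration, not output from any real model:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity: 1.0 for identical direction, near 0.0 for unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "semantic" vectors for three hypothetical documents
doc_networks  = np.array([0.9, 0.1, 0.2, 0.0])
doc_brains    = np.array([0.8, 0.2, 0.3, 0.1])
doc_biography = np.array([0.0, 0.1, 0.0, 0.9])

print(cosine_similarity(doc_networks, doc_brains))     # high: similar meaning
print(cosine_similarity(doc_networks, doc_biography))  # low: unrelated meaning
```

In a real system the vectors would come from the trained model, but the comparison step is exactly this simple.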

Here were the Machine Learning algorithms we considered for generating semantic vectors for each document / node:

  1. Latent Dirichlet Allocation (LDA)
  2. Doc2Vec
  3. LDA2Vec from @chrisemoody

We were interested in creating a visualization that helped users discover more about semantic relatedness. While LDA and LDA2Vec provide interesting information by forcing nodes into a configured number of topics, there is little additional information to be found by showing these pre-assigned groups in a visualization. 

Doc2Vec, by contrast, generates semantic vectors whose closeness depends only on semantic similarity, not on gravitation toward particular topics. This gives the user more to explore and avoids presenting redundant information in the visualization.

Doc2Vec needed a list of documents to assign semantic vectors. Rather than use each book as a document, we decided to get more granular and assign each meaningful entry in the table of contents of each book its own semantic vector. This allowed semantic comparisons across every level in each book's natural hierarchy.

t-SNE projection of semantic vectors

Doc2Vec provided the semantic vectors. The next step was to make these semantic embeddings usable for a visualization.

Spatial Embeddings of Semantic Vectors

The semantic vectors that Doc2Vec generated were 300-dimensional. Humans can effectively visualize only two or three dimensions, so the 300-dimensional vectors needed to be squashed down to three or fewer dimensions to make the visualization user friendly.

t-SNE (t-distributed Stochastic Neighbor Embedding) was the first dimensionality reduction algorithm we considered. It produced impressive results, preserving much of both the local and global semantic structure.
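
A minimal sketch of this reduction with scikit-learn's `TSNE`; the random vectors below stand in for the real 300-dimensional Doc2Vec output:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(42)
vectors = rng.normal(size=(50, 300))  # 50 nodes, 300 dimensions each

embedding = TSNE(
    n_components=3,    # target dimensionality for a 3-D scene
    perplexity=10,     # must be less than the number of samples
    init="pca",
    random_state=42,
).fit_transform(vectors)

print(embedding.shape)  # (50, 3)
```

Each row of `embedding` is now a 3-D position that can be handed to a layout or rendering step.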

Topic vs. Book Centric Layout

t-SNE produced a topic-centric layout, which is common among most current semantic exploration applications. We realized we weren't leveraging all the information each book was giving us, so we used a custom force-layout algorithm in which the edge forces and node attraction depend both on semantic similarity and on each node's relationship to its parent and child nodes within its book.

Going with a custom "book-centric" layout allowed the following benefits:

  • The hierarchical information already present in the books can be leveraged. The authors spent considerable time organizing the hierarchical structure and we wanted to preserve that information.
  • We didn't need to run any additional algorithms to detect the hierarchical organization of content because we already had it.
  • Existing familiarity with books can be utilized. By preserving book structure in the layout, users can use their familiarity with certain books as starting points in the visualization. This would allow a quicker orientation for users. Additionally, there would be clearer and more predictable boundaries between content a user is familiar with vs. not familiar with. This would expose more natural paths for exploration while providing solid anchors to help incorporate new unfamiliar information.

This approach did lose some topic-centric information that t-SNE provided, by pulling nodes closer to the books they belong to. We found other ways (see the video below) to expose this information.
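
The custom layout algorithm itself isn't published here, but a toy force simulation along the lines described might combine two forces: a spring along each book's parent-child edges and a weaker attraction/repulsion driven by semantic similarity. All constants and data below are illustrative, not the actual algorithm:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def layout_step(pos, vecs, edges, lr=0.05):
    """One iteration of a toy book-centric force layout.

    pos   : (n, 2) current 2-D positions
    vecs  : (n, d) semantic vectors
    edges : list of (parent, child) index pairs within each book
    """
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            delta = pos[j] - pos[i]
            dist = np.linalg.norm(delta) + 1e-9
            # Semantic force: similar nodes attract, dissimilar nodes repel.
            forces[i] += (cosine(vecs[i], vecs[j]) - 0.5) * delta / dist
    for parent, child in edges:
        # Spring along each parent-child edge keeps a book's hierarchy together.
        delta = pos[child] - pos[parent]
        forces[parent] += 2.0 * delta
        forces[child] -= 2.0 * delta
    return pos + lr * forces

# Tiny demo: 4 nodes, node 0 is a "book root" with two children
rng = np.random.default_rng(0)
positions = rng.normal(size=(4, 2))
vectors = rng.normal(size=(4, 8))
for _ in range(100):
    positions = layout_step(positions, vectors, edges=[(0, 1), (0, 2)])
```

The key design point is simply that hierarchy edges and semantic similarity both contribute to the final positions, which is what distinguishes a "book-centric" layout from a purely topic-centric one.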

Custom dimensionality reduction layout applied to semantic vectors

The VR Experience

It was now time to come up with a visualization for the spatial embeddings we had generated.

We chose to represent the embedding as a cozy island, with each node rendered as a tree. This landscape lets users tap into the "Memory Palace" effect for enhanced information recall.

To make the island more interesting, a "height map" and a "splat map" were generated from the embedding data to add detail to the terrain. 

The different colors in the splat map are used to render different textures on the terrain. White is used in the splat map to indicate the connections between child and parent nodes within books so that paths between nodes can be drawn on the terrain.

The height map determines the elevation profile of the island. The lighter the value, the higher the elevation. In general, the terrain is higher around nodes that sit higher in their book's hierarchy. For example, the root node that represents an entire book should be at the top of a hill, while its chapters and sections are placed lower as the book spreads out over the terrain.
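
As an illustration of the idea (not the actual terrain pipeline), a toy height map could place a Gaussian bump at each node, with the peak height shrinking as the node sits deeper in its book's hierarchy:

```python
import numpy as np

def height_map(nodes, size=64, sigma=6.0):
    """Toy height map: each node adds a Gaussian bump whose peak shrinks
    with depth, so book roots end up on hilltops.

    nodes : list of (x, y, depth) with x, y in [0, size) and depth 0 = root
    """
    yy, xx = np.mgrid[0:size, 0:size]
    h = np.zeros((size, size))
    for x, y, depth in nodes:
        peak = 1.0 / (1 + depth)  # root -> 1.0, chapter -> 0.5, section -> 0.33
        h += peak * np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma ** 2))
    return h / h.max()  # normalize elevations to [0, 1]

# Hypothetical book: root node at the centre, two chapters nearby
nodes = [(32, 32, 0), (20, 40, 1), (44, 24, 1)]
terrain = height_map(nodes)
```

A splat map could be painted analogously, writing a path color along the straight line between each parent-child pair of node positions.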

Splat map and height map generated from the layout of the semantic vectors.

From there it was a matter of iterating on design and interactions in the VR application. Unity was used for development of the VR app and an Oculus GO was the targeted headset for the VR experience.

The VR app allows users to directly select a node on the island, navigate throughout a book's hierarchy, and see nodes in books that have semantically similar nodes in other books.

Here is the result:

Outcomes

It quickly became evident that the proof of concept provided effective answers to questions that require a semantic approach.

1 — How would I find content where phrasing does not match typical keyword searches?

The natural grouping of nodes by their semantics makes it fairly easy to locate areas to start finding nodes that are semantically similar to a topic in question. This works well if you are already familiar with a book that has a chapter / section that contains the text or topic you want to look further into. The VR app allows you to start with that book, navigate to the node, and then explore nearby nodes on the island or highlight the nodes in other books that have semantic similarity.

2 — How do all my books relate to each other?

Take a look at how some high-level topics map on top of the island:

overhead view of island with all nodes rendered as trees. Image shows natural topic groupings and interesting intersections of topics.

You will see the main topics that were expected to emerge like Neuroscience, Graph Theory, Chaos, and Information Theory.

The 3D experience also exposed additional semantic information that would be hidden otherwise:

  • Notice that on the left side the book "Steve Jobs: A Biography" by Walter Isaacson stands alone. It does not have any substantial overlap semantically with the other books. The visualization separated it appropriately.
  • The intersection of the Network Science and Neuroscience topics consists of a book called "Networks of the Brain" by Olaf Sporns. This book explores how Network Science can be used to study the brain. Wow!
  • The Probability and Statistics region rests nicely in the middle of all the science-oriented topics. I tend to think that probability is at the basis of everything, so it was nice to see the semantics and layout algorithms agree with me :-)

3 — Which of the books that I have not read would be a good place to start?

Someone who has not read one or more of the books on the map can quickly pick out a book that overlaps with the books they have already read and looks interesting based on how it is positioned semantically!

Additional Benefits From VR

The VR experience provided the benefit of tapping into the spatial encoding abilities of our brains. After spending minimal time in the VR experience, I found it trivial to recall the layout of the books on the island; it was as easy as sketching a map of my own home.

Consequently, I can also recall almost all of the semantic similarities across books using spatial triggers from the island. The "Memory Palace" effect worked wonders in this application.

Where to Go From Here?

The results of this proof of concept were exciting. It provided a whole new perspective on an existing body of knowledge and spurred more ideas on how to expand its impact:

  1. Allow users to drop whiteboards onto the island and write their thoughts and draw pictures on them.
  2. Provide users with reading plans across a new corpus that leverage semantics to optimize knowledge gain. Users could mark nodes complete so they can see their progress and anticipate new topics and connections that will exist from topics already covered.
  3. Refine the layout algorithm with some mathematical rigor to allow books to be added incrementally. This would be a great feature so users could see how their personal semantic knowledge evolves with the addition of new books.
  4. Take advantage of voice capabilities of VR headsets to do a voice search on text to find a related semantic area.
  5. Highlight paths across multiple books that are linked via semantically similar nodes (this perhaps could lead to fun exercises for students writing papers based on semantic similarities to tie multiple resources together).
  6. Perform studies on how "book centric" vs. "topic centric" layouts affect learning/recall performance of a corpus.

Those are just a few of the high level ideas that popped into our heads. What are some other ways you see this approach being used to help students explore their content?

John Martin

UX Developer

John Martin is a Band III UX Developer. He has been with Unicon for seven years. He specializes in implementing accessible user interfaces that are responsive and performant. John also excels at creating rich data visualizations using D3, Three.js, and other front-end technologies. He is well versed in AngularJS and React and enjoys building rich single page applications that solve big problems for clients. He has plenty of back-end knowledge to make integrations go fast and is also dabbling in AWS serverless technology. His other interests include Machine Learning, natural language processing, and graph databases.

At Unicon, John has worked with publishers, social networks, and colleges. In most cases he develops custom solutions, often leveraging a single-page-application approach. John led the UX development for the California Community Colleges Assess project. The project demanded WCAG 2.0 compliance, and John delivered a smooth user experience for students, faculty, and administrators that was responsive, mobile friendly, and accessible. He now works with Facebook in delivering a socially oriented CMS for their Developer Circles community. This application is a single page progressive web application built using AWS serverless technologies. It, too, is responsive and performs well on mobile devices with low bandwidth and connectivity. It was also built to scale to tens or hundreds of thousands of users. John has an appetite for professional development. He posts frequently about interesting side projects he works on and has presented to the company UX team as well as at Desert Code Camp several times.

John Martin loves teaching as much as he does learning. He worked previously as a Coordinator Sr. at Arizona State University managing the tutoring centers and a freshman transition program that housed and provided 9 credit hours for over 400 incoming freshmen students. Before that, he worked as a secondary advanced Mathematics instructor at Crested Butte Academy in Colorado. He also had experience as an assistant camp director and intermediate skateboarding coach for an extreme sports summer camp in California.

John graduated with a B.S. in Secondary Education Mathematics from ASU and stayed to complete his M.S. in Discrete Mathematics with an emphasis in Abstract Algebra. He has begun the process of getting the AWS Developer certification.