Current research engineers and live projects

We currently have nearly 50 live projects with 40 companies.

Our research engineers are involved in everything from procedural generation of content for international games companies and assistive technologies for stroke rehabilitation, to the future of interactive technologies for major broadcasters and virtual reality for naval training.

Get to know some of our current research engineers below, watch our video filmed during our CDE Winter Networking Event at the British Film Institute, London, and hear more from our students and alumni.

Research Engineers looking for placements:

A new cohort of students joined the CDE Engineering Doctorate (EngD) in Digital Entertainment programme in September 2019. If you're a company or organisation interested in collaborating with us, do get in touch with our project co-ordinators.

Luke Worgan - Enhancing Multi-sensory Immersion within Augmented and Virtual Reality Environments Using Ultrasonic Waves


2021
Kavisha Jayathunge
Supervisor:
Dr Richard Southern

Emotionally expressive speech synthesis

Emotionally expressive speech synthesis for a multimodal virtual avatar.

Virtual conversation partners powered by Artificial Intelligence are ubiquitous in today’s world, from personal assistants in smartphones, to customer-facing chatbots in retail and utility service helplines. Currently, these are typically limited to conversing through text, and where there is speech output, this tends to be monotone and (funnily enough) robotic. The long-term aim of this research project is to design a virtual avatar that picks up information about a human speaker from multiple different sources (i.e. audio, video and text) and uses this information to simulate a realistic conversation partner. For example, it could determine the emotional state of the person speaking to it by examining their face and vocal cadence. We expect that taking such information into account when generating a response would make for a more pleasant conversation experience, particularly when a human needs to speak to a robot about a sensitive matter. The virtual avatar will also be able to speak out loud and project an image of itself onto a screen. Using context cues from the human speaker, the avatar will modulate its voice and facial expressions in ways that are appropriate to the conversation at hand.

The project is a group effort and I'm working with several other CDE researchers to realise this goal. I'm specifically interested in the speech synthesis aspect of the project, and how existing methods could be improved to generate speech that is more emotionally textured.

Background: MEng in Electronics and Software Engineering from the University of Glasgow


2020
Manuel Rey Area
Supervisor:
Dr Christian Richardt

Deep View Synthesis for VR Video

With the advent of VR, it is key for users to be fully immersed in the virtual world. Users must be able to move their head freely around the virtual scene, unveiling occluded surfaces, perceiving depth cues and observing the scene down to the last detail. Furthermore, if scenes are captured with casual devices (such as smartphones), anyone could convert their 2D pictures into a fully immersive 3D experience, bringing a new digital world representation closer to ordinary users. The aim of this project is to synthesise novel views of a scene from a set of input views captured by the user. Eventually, the whole 3D geometry of the scene must be reconstructed and depth cues preserved to allow 6-DoF (degrees-of-freedom) head motion while avoiding the well-known VR sickness. The main challenge lies in generating synthetic views with a high level of detail, including light reflections, shadows and occlusions, resembling reality as closely as possible.

Background:

MSc Computer Vision, Autonomous University of Barcelona

BSc Telecommunications Engineering, University of Vigo


2020
Will Kerr
Supervisor:
Dr Wenbin Li

Autonomous Filming Systems for Cinematography

Overview

The PhD topic is Autonomous Filming Systems (AFS) for the application of professional cinematography. Its focus is on developing a camera-equipped, ground-based (wheeled) robot, intended to provide better accuracy, safety, efficiency and artistic control than the current manual method of filming on-set (e.g. three people: one moving a camera 'dolly', one controlling the camera, and one holding cables).

Current research

Examples of autonomous filming already exist, mostly concentrating on UAV (drone) platforms. These have been realised by commercial companies (DJI, Skydio, Yuneec etc.) and by the research community (e.g. [1], [2], [3], [4]). They range from simple waypoint-based trajectory planning to some integration of artistic cinematography principles in camera-pose decisions. Most systems integrate localisation, mapping and visual processing to aid trajectory planning.

Research Plans

This research will advance the above state of the art by focussing on the more nuanced artistic aspects of how professional video is captured. Firstly, it will analyse and distil existing movie content to understand professional techniques using computer vision (the most basic example is the rule of thirds, but the developed algorithms are expected to uncover more). Investigations will cover computer vision analysis (e.g. colour, focus, movement and framing) and the emotional impact that various cinematic techniques have on viewers. Secondly, this learnt behaviour will be applied to physical and simulated ground robots in a representative on-set filming environment, attempting to automate the decision-making process by which artful camera movement is realised. Thirdly, evaluation and iteration will refine performance in conjunction with industrial collaboration, ideally on a real-life filming task.
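As a toy illustration of how one such rule could be scored algorithmically, the sketch below rates how close a detected subject sits to a rule-of-thirds "power point". The function and its normalisation are illustrative assumptions, not the project's actual method.

```python
import numpy as np

def rule_of_thirds_score(bbox, frame_w, frame_h):
    """Score how close a subject's centre lies to the nearest
    rule-of-thirds power point (an intersection of the third lines).
    bbox = (x, y, w, h) in pixels; returns a value in [0, 1],
    where 1.0 means the subject sits exactly on a power point."""
    x, y, w, h = bbox
    cx, cy = x + w / 2, y + h / 2
    # The four power points of the frame.
    points = [(frame_w * i / 3, frame_h * j / 3)
              for i in (1, 2) for j in (1, 2)]
    # Distance to the nearest power point, normalised by the
    # frame diagonal so the score is resolution-independent.
    diag = np.hypot(frame_w, frame_h)
    d = min(np.hypot(cx - px, cy - py) for px, py in points)
    return 1.0 - d / diag

# Example: a subject detected left-of-centre in a 1920x1080 frame.
print(round(rule_of_thirds_score((560, 300, 160, 240), 1920, 1080), 3))
```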


2019
Kris Kosunen
Supervisor:
Dr Christof Lutteroth; Prof Eamonn O'Neill
Industrial Supervisor:
Dr Chris Dyer

VR Empathy Training for Clinical Staff

Research Project: VR Empathy Training for Clinical Staff

Industrial Partner: Royal United Hospital, Bath

Empathy for patients is important for good clinical outcomes. It can sometimes be challenging to develop empathy and understanding for cognitive or mental disorders because it is hard to imagine what they feel like. For example, people affected by dementia or psychosis may not show physical symptoms but may behave unusually. Without an emotional understanding of such conditions, it can be difficult for clinical staff to treat people effectively. 

Virtual reality (VR) is being used increasingly for learning and training. VR makes it possible to immerse users in complex interactive scenarios, allowing them to safely experience and practice situations that would be difficult to arrange in reality. This creates new opportunities for VR in clinical training. In this project, we will develop a VR simulator that helps clinical staff to develop empathy and understanding for people affected by cognitive or mental disorders.

Background

Informatics: Serious Games, University of Skövde, Sweden

I studied Serious Games at the University of Skövde in Sweden, which gave me a firm grounding in how games technology and ideas can be used in a serious way. I then worked as a social media community manager in the Nordic region for Lionbridge, on a contract with HTC Vive; during this time I learned how Nordic companies are making use of VR technology across all manner of sectors, and was inspired to follow this research trend.


2019
Philip Lorimer
Supervisor:
Dr Wenbin Li; Dr Alan Hunter
Industrial Supervisor:
Andrew Nancollis

Autonomous Robots for Professional Filming

Research Project: Autonomous Robots for Professional Filming

Industrial Partner: Motion Impossible

The filmmaking industry has been growing rapidly for decades. A typical production pipeline may involve considerable effort by industry professionals to plan, capture and post-produce an outstanding commercial film. The current workflow is heavily reliant on human input along with a finely tuned robotics platform.

The aim of this research project is to explore the use of autonomous robots for the application of professional filming, with the primary focus of: 

  1. Designing an advanced ground robot equipped with multiple types of sensors that support accurate perception of its surroundings.

  2. Developing a fully autonomous pipeline for the robot to plan moving trajectories and perform the capture.

MSc Project: Perception Module for an Autonomous Formula Student Vehicle.

Background: MSc Computer Science, University of Bath



2019
Michal Gnacek
Supervisor:
Dr Ellen Seiss, Dr Theodoros Kostou, Dr Emili Balaguer-Ballester
Industrial Supervisor:
Dr Charles Nduka MA, MD, FRCS

Affect recognition in virtual reality environment

Industrial Partner

emteq


Research Project

I am working with Emteq on improving affect recognition using various bio-signals with the hope of creating better experiences and creating completely new ones that have the potential to tackle physical and mental health problems in previously unexplored ways.

The ever-increasing use of virtual reality (VR) in research as well as in mainstream consumer markets has created a need to understand users' affective state. This would not only guide the development of the technology but also allow for the creation of brand-new experiences in entertainment, healthcare and training applications.

This research project will build on the existing research conducted by Emteq with their patented device for affect detection in VR. In addition to the already-implemented sensors (electromyography, photoplethysmography and an inertial measurement unit), which need to be evaluated, other modalities will be explored for potential inclusion and for their ability to determine emotions.

Background

I have 4 years of experience as a Games & Software Engineer.

BEng (Hons) Computer Games Development at Ulster University


2019
Nick Lindfield
Supervisor:
Prof Wen Tang

Deep Neural Networks for Computer Generated Holography

Project Interest:

Both Immersive Technologies and Artificial Intelligence have made huge strides over the last couple of decades. My desire is to develop software applications that utilise the capabilities of at least one of these to create solutions to real-world problems.

Virtual Reality:

The power of VR is the unrivalled level of presence it provides. Presence is the sensation that a user inhabits a virtual experience, both physically and emotionally.

Augmented Reality:

The potential of this technology lies in its ability to interact with digital information in a way that makes you more aware of your surroundings. Applications can encourage interaction with your real world environment or alternatively be built around social interactions.

Artificial Intelligence:

Neural networks can provide real-time analysis of data that would otherwise either require pre-processing or be impossible. This is done by digesting large amounts of data into trends in the form of mathematical weights.

Background:

  • Software Engineer
  • MSc Computer Science
  • MChem Materials Chemistry

MSc Computer Science Project - Visualisation and tactile exploration of Atomic Force Microscopy Data Using Virtual Reality and Haptic Feedback:

Developed a script to create a 3D model from a file containing x, y, z data. The model can be visualised using a virtual reality headset while the user examines the contour of the surface via touch. These sensations were created using the Ultrahaptics system, which produces a low-level sensation of touch in mid-air.
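A small sketch of the kind of script described, under the assumption that the x, y, z samples form a regular grid (as a microscopy scan typically provides): each grid cell becomes two triangles of a mesh that a VR viewer could then render. The toy height field is illustrative.

```python
import numpy as np

def xyz_grid_to_mesh(xyz, rows, cols):
    """Build a triangle mesh from row-major gridded (x, y, z) samples.
    Returns (vertices, triangles) as numpy arrays."""
    vertices = np.asarray(xyz, dtype=float).reshape(rows * cols, 3)
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            # Two triangles per grid cell.
            tris.append((i, i + 1, i + cols))
            tris.append((i + 1, i + cols + 1, i + cols))
    return vertices, np.array(tris)

# Toy 3x3 height field standing in for scan data.
xs, ys = np.meshgrid(np.arange(3), np.arange(3))
zs = np.sin(xs + ys)
xyz = np.dstack([xs, ys, zs]).reshape(-1, 3)
verts, tris = xyz_grid_to_mesh(xyz, 3, 3)
print(verts.shape, tris.shape)  # -> (9, 3) (8, 3)
```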


2019
Alexz Farrall
Supervisor:
Dr Simon Jones; Dr Ben Ainsworth
Industrial Supervisor:
Dr. Sabarigirivasan Muthukrishnan

The guide to mHealth implementation

Research Project: The guide to mHealth implementation – designing, developing, and evaluating a new evidence-based mobile intervention to support medical students suffering from reduced well-being.

Project partner: Avon and Wiltshire Mental Health Partnership NHS Trust (AWP)

The project will not only be a collaboration between the University of Bath and AWP, but will also work alongside Bristol's Medical School to directly incorporate stakeholders into the design and evaluation of a new digital intervention. Smartphone apps are an increasingly popular means of delivering psychological interventions to patients suffering from reduced well-being and mental disorders. One such population is medical students, with recent studies identifying 27.2% as having depressive symptoms, 11.1% as having suicidal ideation, and 45-56% as having symptoms suggestive of burnout. Moreover, through the use of advanced human-computer interaction (HCI) and behaviour-therapy techniques, this project aims to contribute innovative research to increase the effectiveness of existing digital mental health technologies. The research team hopes ultimately to implement the smartphone app within the NHS, creating new opportunities to support the entire medical workforce.

MSc Thesis: The development of a mindfulness-based smartphone intervention to support doctoral students suffering from reduced wellbeing.

Background: 

BEng Electronics and Communication Engineering (IET accredited), University of Kent

University of Bath’s vertically integrated project (VIP) well-being team leader


2019
Kari Noriy
Supervisor:
Xiaosong Yang


Industrial Partner:

Charisma AI

Research Interest: 

Artificial Intelligence (AI) is set to change how businesses operate. It comes in many different flavours, such as natural language processing, object detection and content creation, with many more fields yet to be discovered.


AI is no longer something written in fiction novels; it is here. In the early '80s, the data needed to train and experiment with models was hard to come by, and the cost of collecting, storing and processing it was high. According to Forbes, in 2018 we generated "2.5 quintillion bytes of data each day", and we expect this number to increase as more Internet of Things (IoT) devices are added; meanwhile, Moore's Law states that processor speed and overall processing power will double every two years.

During my EngD, I aim to research content generation for film and games, focusing on character-driven motion such as facial animation. My aim is to add a new layer of realism to animation: in current work, we find that motionless computer-generated humanoid characters look real until motion is added. There are subtle details that current animation techniques fail to capture, so the characters fall into the uncanny valley, where we get an unsettling feeling; my aim is to eliminate this.


Background:

BA (Hons) Computer Visualisation and Animation, Bournemouth University

Download Kari's Research Profile


2019
Isabel Fitton
Supervisor:
Dr Christof Lutteroth; Dr Michael Proulx
Industrial Supervisor:
Jeremy Dalton

Improving skills learning through VR

Research Project: Improving skills learning through VR

Industrial Partner: PwC UK

Project outline and objectives:

VR promises to support people in learning new skills by immersing learners in virtual environments where they can practise those skills and receive feedback on their progress. In this project we will investigate how VR can help learners acquire manual skills, as required in engineering and manufacturing, for example. We will design VR learning simulations and new approaches to support learners, and test these simulations to determine whether skills learned in VR transfer to the real world. We will also compare virtual training simulations to more traditional learning aids such as instructional videos.

Background

Multi-disciplinary background in Psychology and Human Computer Interaction (HCI).

BSc Psychology with Placement, University of Bath 

BSc Project: Immersive virtual environments and embodied agents for e-learning applications 


2019
Luke Worgan
Supervisor:
Prof Mike Fraser; Prof Jason Alexander

Enhancing Perceptions of Immersion

Enhancing Perceptions of Immersion within Multi-Sensory Environments through the Introduction of Scent Stimuli Using Ultrasonic Particle Manipulation. 

Present virtual reality environments focus on providing a rich, immersive audio-visual experience; however, the technology required to enhance a user's perception of smell, touch or taste is yet to reach the same level of sophistication and remains largely absent from virtual and augmented reality systems. Existing technologies rely on fan-based systems, which may lack temporal and spatial resolution. This research project explores ultrasonic particle manipulation – the process of isolating and manipulating the behaviour of individual particles within an acoustic field – as a method for enhancing olfactory resolution. Research will focus on the development of a discreet ultrasonic system designed to introduce scent stimuli into multi-sensory environments and increase user perceptions of immersion.

Background: 

I have a multi-disciplinary background as a musician, artist and computer scientist, with a BSc in Sound Design (LSBU), an MSc in Creative Technology (UWE), and an MSc in Human-Computer Interaction completed as part of my doctorate programme at the University of Bath.


lukeworgan.com


2019
Ben Snow
Supervisor:
Prof Jian Chang

Griffon Hoverwork Simulator for Pilot Training

Industrial Partner

Griffon Hoverwork

Griffon Hoverwork are both pioneers and innovators in the hovercraft space. With over 50 years of experience making, driving and collecting data about hovercraft, GHL has the resources to build a realistic and informative training simulator. We will design a virtual environment in which prospective hovercraft pilots can train, receive feedback, and have fun driving a physically realistic hovercraft. The simulator will incorporate the experience of GHL's highly trained pilots and a wealth of craft data collected from real vehicles to provide a simulation tailored to the Griffon 2000TD craft. GHL's training protocols will be used to provide specific learning objectives and to give novice and professional pilots feedback on all aspects of craft operation. Creating a realistic hovercraft model will also allow the simulation environment to be used as a research testbed for future projects.

Background

I gained an MPhys from the University of Manchester in 2019. My research project focused on nanoscale thermoelectric transport in ferromagnets for graphene spintronics applications. I spent my third year abroad at the University of Maryland, College Park, where I worked on the USA's largest student-run cyclotron.


2018
Aaron Demolder
Supervisor:
Dr Hammadi Nait-Charif, Dr Valery Adzhiev
Industrial Supervisor:
Dr. Andrzej Kaczorowski

Data capture and 3D integration for VFX and Emerging Technology


Research Project: Image Rendering for Holographic Display

Industrial Partner: VividQ


A significant portion of research in Computer Generated Holography fails to apply advancements in image rendering made in Computer Graphics, instead attempting to implement its own renderers. These provide solutions that are incompatible with the modern creative production workflow and unable to meet the expected feature set and quality of a final-frame, production-quality renderer – all whilst taking longer to compute. There is also no analysis of holographic display traits from a creative perspective.


This project aims first to identify the artistic characteristics of holographic display and provide methods for working within its unique qualities. Once the details of this new medium are identified, we can build pipeline tools with appropriate artistic controls to define correct practice, and implement compatibility with existing industry practices. Given the foundational artistic and technical pipelines, we can then provide methods for advancing image/layer-based holographic generation using production-quality renderers, adding support in hologram-generation software for multi-perspective holographic display, transparencies and volumes.

With no existing consumer holographic displays, this work will also demonstrate the feasibility of applications of holography on real prototype devices, in areas such as Augmented Reality.

Background: Art, Design, Animation, VFX, Computer Science

BA (Hons) Computer Animation and Visualisation

Download Aaron's Research Profile


https://aarondemolder.com


2018
Robert Kosk
Supervisor:
Dr Richard Southern
Industrial Supervisor:
Willem Kokke

Biomechanical Parametric Face Modelling and Animation

Industrial Partner:

Humain

Research Project: Biomechanical parametric face modelling and animation

Project Overview

Modelling and animation of high-quality, digital faces remains a tedious and challenging process. Although sophisticated data-capture and manual processing allow realistic results in offline production, there is demand in the rapidly developing virtual reality industry for fully automated and flexible methods.

My project aims to develop a parametric template for physically based facial modelling and animation, which will:

- automatically generate any face, either existing or synthetic,

- intuitively edit the structure of a face without affecting the quality of animation,

- reflect the non-linear nature of facial movement,

- retarget facial performance, accounting for the anatomy of particular faces.

The ability to generate faces from governing, meaningful parameters such as age, gender or ethnicity is a crucial objective for wider adoption of the system among artists. Furthermore, the template can be extended with numerous novel applications, such as animation retargeting driven by muscle activations, fantasy-character synthesis or digital forensic reconstruction.

Background: Computer Science

BA (Hons) Computer Visualisation and Animation

Download Robert's Research Profile


www.robertkosk.com


2018
Karolina Pakenaite
Supervisor:
Prof Peter Hall, Dr Michael Proulx

An Investigation into Tactile Images for the Visually-Impaired

Research Project: An Investigation into Tactile Images for the Visually-Impaired Community  

My aim is to provide the visually impaired community with access to photographs using sensory substitution. I am investigating the translation of photographs into simple pictures, which can then be printed in tactile form. Some potential contributions could be the introduction of denotation and projection with regard to style. Beneficiaries could also extend beyond computing into other academic disciplines such as Electronic Engineering and Education. Accessible design is essentially inclusive design for all: sighted individuals often feel tempted to touch art pieces in museums or galleries, and while many artworks were originally created to be touched, a cardinal no-touch rule is usually observed to preserve them. Accessibility features may be designed for a particular group of the community, but they can, and usually do, end up being used by a wider range of people. Towards the end of my research, I hope to adapt my work for use by blind primary-school children.

To get simplified pictures, I recently tried translating photographs into two different styles: ‘Icons Representation’ and ‘Shape Representation’.  

For the Icons Representation of a photograph, I used a combination of object-detection and saliency-detection algorithms to identify salient objects only. I used Mask R-CNN object detection and combined its output with a saliency map from the PiCANet detection algorithm, which gives the probability that a given pixel belongs to a salient object within an image. Each detected salient object is replaced with a corresponding simplified icon on a blank canvas of the same size as the input image. Geometric transformations are applied to the icons to avoid any overlaps, and a background edge map is added to give further context about the image.
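A simplified sketch of the filtering step described above: instance masks (as Mask R-CNN would produce) are kept only when the saliency map (as PiCANet would produce) rates them salient on average. The toy arrays and the 0.5 threshold are illustrative.

```python
import numpy as np

def select_salient_objects(masks, saliency, threshold=0.5):
    """masks: list of HxW boolean instance masks (e.g. Mask R-CNN output).
    saliency: HxW float map in [0, 1] (e.g. from PiCANet).
    Returns indices of instances whose mean saliency over the mask
    exceeds the threshold -- these are the ones replaced by icons."""
    keep = []
    for i, mask in enumerate(masks):
        if mask.any() and saliency[mask].mean() > threshold:
            keep.append(i)
    return keep

# Toy example: two instances on a 4x4 image, one in a salient region.
saliency = np.zeros((4, 4)); saliency[:2, :2] = 0.9
m1 = np.zeros((4, 4), bool); m1[:2, :2] = True   # salient object
m2 = np.zeros((4, 4), bool); m2[2:, 2:] = True   # background object
print(select_salient_objects([m1, m2], saliency))  # -> [0]
```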

For the Shape Representation of an object, I experimented with different image-segmentation methods and replaced each segment with the most appropriate canonical shape, using a method introduced by Voss and Suße. Segments are normalised into a canonical frame using a whitening transform; we then compare these normalised shapes with the canonical shapes in a library and decide which correlates most strongly. An inverse transform is then applied to the chosen library shape, in effect moulding it to match its segment closely. The result is a simplified image of the object built from a combination of shapes. We plan to have these Shape Representations printed in 3D.
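A rough sketch of the normalise-and-match idea under simple assumptions: each segment's outline points are whitened into a canonical frame, summarised by a rotation-invariant descriptor, and matched against a small shape library. The descriptor and library here are illustrative stand-ins for the Voss and Suße method.

```python
import numpy as np

def whiten(points):
    """Map an Nx2 point set into a canonical frame: zero mean,
    identity covariance (a whitening transform)."""
    centred = points - points.mean(axis=0)
    cov = np.cov(centred, rowvar=False)
    # Inverse square root of the 2x2 covariance via eigendecomposition.
    vals, vecs = np.linalg.eigh(cov)
    w = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return centred @ w

def descriptor(points, bins=16):
    """Rotation-invariant summary of a whitened shape: a histogram
    of point distances from the centroid."""
    r = np.linalg.norm(points, axis=1)
    hist, _ = np.histogram(r, bins=bins, range=(0, 3), density=True)
    return hist

def best_match(segment, library):
    """Index of the canonical library shape whose whitened
    descriptor is closest to the segment's."""
    d = descriptor(whiten(segment))
    dists = [np.linalg.norm(d - descriptor(whiten(s))) for s in library]
    return int(np.argmin(dists))

# Toy example: a stretched ellipse should match the circle, because
# the whitening transform undoes the stretch.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
square = np.c_[np.clip(np.cos(t) * 1.5, -1, 1), np.clip(np.sin(t) * 1.5, -1, 1)]
ellipse = circle * [3.0, 0.5]
print(best_match(ellipse, [circle, square]))  # -> 0
```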

Due to Covid-19, we were unable to test these tactile images with participants using touch, but a few obvious limitations were found. We will continue to investigate and improve our simplified images. Computer Vision will allow us to create autonomous functionality for translating photographs into tactile images, which we hope will reduce the cost of tactile-image production. We will also use knowledge from the psychology of object recognition and experiment with human participants to make our implementation as effective as possible for real users. A combination of Computer Science and Psychology will prepare us to adapt our work for use in primary-school education. This could mean teaching congenitally blind children to understand the different sizes of objects that are rarely touched (e.g. an elephant or a mouse), or teaching them to indicate the distance of an object on paper by drawing far-away objects smaller.

Background: Maths/ Computer Science

MSci Mathematics with a Year in Computer Science, Birmingham

Download Karolina's Research Profile


2018
Katarzyna Wojna
Supervisor:
Dr Christof Lutteroth, Dr Michael Wright
Industrial Supervisor:
Dr David Beattie

Natural user interaction and multimodality

Research Project: Multimodal interactions for digital out of home

Industrial Partner: Ultraleap

This project explores how multi-modal interaction can enable more meaningful collaboration with systems that use Natural User Interaction (e.g. voice, gesture, eye gaze, touch etc.) as the primary method of interaction.  

In the first instance this project will explore Digital Out Of Home systems.  However, the research and insights generated from this application domain could be transferred to virtual and augmented reality as well as more broadly to other systems where Natural User Interaction is used as the primary method of interaction.

One of the challenges of such systems is to design adaptive affordances so that a user knows how to interact with an information system whilst on the move with little or no training.  One possible solution is to provide multiple modes of interaction, and associated outputs, which can work together to enable meaningful or "natural" user interaction. For example, a combination of eye gaze and gesture to "understand" that the user wishes to "zoom in" to a particular location on a map.

Addressing this challenge, this project explores how multi-modal interaction (both input and output) can enable meaningful interactions between users and these systems.  That is, how can a combination of one or more inputs and the associated outputs allow the user to convey intent, perform tasks and react to feedback, while the system provides meaningful feedback to the user about what it "understands" to be the user's intended actions?
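As a toy illustration of the eye-gaze-plus-gesture "zoom" example above, the sketch below fuses a gaze sample (where) with a gesture event (what) into a single intent. All types, field names and thresholds are hypothetical, not part of the project's system.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    x: float         # normalised screen coordinates, 0..1
    y: float
    dwell_ms: float  # how long the gaze has rested near this point

@dataclass
class Gesture:
    kind: str        # e.g. "pinch_out", "pinch_in", "swipe"

def fuse_intent(gaze, gesture, min_dwell_ms=300.0):
    """Combine gaze (where) with gesture (what) into one intent.
    A pinch-out while the user has dwelt on a map location is read
    as 'zoom in at that location'; thresholds are illustrative."""
    if gesture.kind == "pinch_out" and gaze.dwell_ms >= min_dwell_ms:
        return ("zoom_in", gaze.x, gaze.y)
    if gesture.kind == "pinch_in":
        return ("zoom_out", gaze.x, gaze.y)
    return None

print(fuse_intent(GazeSample(0.62, 0.4, 450.0), Gesture("pinch_out")))
```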

Demo paper 'Particle-ulary Haptics' accepted to the Eurohaptics 2020 Conference

"Mid-air haptic feedback technology produces tactile sensations that are felt without the need for physical interactions, and bridges the gap with digital interactions, by making the virtual feel real. However, existing mid-air haptic experiences often do not reflect user expectations in terms of congruence between visual and haptic stimuli. To overcome this, we investigate how to better present the visual properties of objects, so that what one feels is a more accurate prediction of what one sees. In the following demonstration, we present an approach that allows users to fine-tune the visual appearance of different textured surfaces, and then match the set corresponding mid-air haptic stimuli in order to improve visual-haptic congruence"

Eurohaptics 2020 Demonstration video


View Katarzyna's Research Outputs


2018
Olivia Ruston
Supervisor:
Dr Leon Watts

Designing Interactive Wearable Technology

Research Project: Designing Interactive Wearable Technology for Embodied Movement Applications 

This research focuses on wearables and e-textiles, considering fashion design/construction processes and their socio-cultural impact. My most recent work has involved creating and experimenting with bodice garments to understand how information about their motion might help people to learn about the way they move, so that they can learn to move better. 

Background: Computer Science

BSc Computer Science with Placement, University of Bath  

BSc Project: An Investigation of User Interactions with Wearable Ambient Awareness Technologies 


2018
Jack Brett
Supervisor:
Dr Christos Gatzidis
Industrial Supervisor:
Dr Ning Xu

Augmented Music Interaction and Gamification

Industrial Partner:

Roli

Research Project: Augmented Music Interaction and Gamification

I am currently exploring barriers to entry in music creation, as well as assessing how one learns an instrument at beginner level.

Background: Games Technology

BSc (Hons) Games Technology. Previous research work was conducted mostly with the Psychology Department where programs were created for mobile/PC use and then later branched into virtual reality.  Most recently, I have been focusing on a VR program which is used in clinical trials to gauge the severity of certain mental illnesses such as dementia.


Download Jack's Research Profile

View Jack's Research Outputs


http://jackbrett.co/


2018
Neerav Nagda
Supervisor:
Dr Xiaosong Yang, Dr Jian Chang, Dr Richard Southern
Industrial Supervisor:
James Coore

Asset Retrieval System

Industrial Partner:

Absolute Post

Research Project: Asset Retrieval Using Knowledge Graphs and Semantic Tags

The nature of my project is to be able to search, view and retrieve digital assets within a database of the entire company’s works, from a single application.

There are three major challenges which this project aims to solve:


  1. Searching and retrieving specific data.

The current method is not specific. Data can be found, but usually this set contains both the required data and a larger set of irrelevant data. The goal is to avoid the retrieval of irrelevant data, which will significantly reduce data transfer times.

  2. Understanding the contents of a file without needing to open it in specialised software.

This can be achieved by generating visual previews of a file's contents. The generation of semantic tags will allow quicker and more efficient searching.

  3. Finding connections in data, such as references and dependencies.

Some files may import or reference data from other files. This linked data can be addressed by creating a Semantic Web or Knowledge Graph. Often there are entities which are not necessarily represented by a file, such as a project, but which have many connections to other entities. Such entities become handles in the Semantic Web that can be used to locate a collection of connected entities.
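A minimal sketch of the handle idea: assets and projects as graph nodes, references as typed edges, and a traversal that collects everything reachable from a project handle (the kind of query that would let only the needed files be unarchived). Node names and relations are invented for illustration.

```python
from collections import defaultdict

class AssetGraph:
    """Toy knowledge graph: nodes are assets or projects, edges are
    typed relations such as 'contains' or 'references'."""
    def __init__(self):
        self.edges = defaultdict(list)   # node -> [(relation, node)]

    def add(self, src, relation, dst):
        self.edges[src].append((relation, dst))

    def connected(self, handle):
        """All entities reachable from a handle (e.g. a project node)."""
        seen, stack = set(), [handle]
        while stack:
            node = stack.pop()
            for _, nxt in self.edges[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

g = AssetGraph()
g.add("project:spot_ad", "contains", "scene:shot_010")
g.add("scene:shot_010", "references", "model:car_v3")
g.add("model:car_v3", "references", "texture:car_paint")
print(sorted(g.connected("project:spot_ad")))
```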

The disciplines that this project covers are:

  • Web science

  • Big Data

  • Data Mining

  • Computer Vision

  • Natural Language Processing

The integration of such a system in industry would significantly reduce searching and retrieval times for data. This can be used in many scenarios, for example:

  • Retrieving data from backups for further work

A common task is to retrieve a project from archives. Most of the time the entire project is not required to be unarchived, so finding the specific data significantly reduces unarchiving times.

  • Reducing duplication of data

If a digital asset can be reused, it can be found from this system and imported into another project. This saves the time of remaking previous work.

  • Reviewing work

Searches can be filtered, for example finding all work produced in the previous day or sorting works by date and time. Creating a live feed would allow for quicker access to data to review works.


Background: Computer Science

BA Computer Visualisation and Animation. I specialised in programming and scripting, developing tools and plugins for content-creation applications. My major project in my final year sparked my research interest in machine learning and neural networks for motion synthesis.

Download Neerav's Research Profile


2018
Sydney Day
Supervisor:
Lihua You
Industrial Supervisor:
Matt Hooker

Humanoid Character Creation Through Retargeting

Industrial Partner:

Axis Animation

Research Project: Humanoid Character Creation Through Retargeting

This project explores the automatic creation of rigs for humanoid characters with associated animation cycles and poses. Through retargeting, a number of techniques can be covered:

- automatic generation of facial blendshapes from a central reference library

- retargeting of bipedal humanoid skeletons

- transfer of weights between characters of differing topologies.

The key goals are to dramatically reduce the amount of time needed to rig certain types of character, thus freeing up the riggers to work on fancier, more complex rigs that cannot be automated. 

Background: Computer Science

BA (Hons) Computer Animation and Visualisation

Download Sydney's Research Profile


2017
Kenneth Cynric Dasalla
Supervisor:
Dr Christof Lutteroth

Effects of Natural Locomotion in VR

Research Project: Effects of Natural Locomotion in VR

MSc Digital Entertainment - Masters Project:

Multi-View High-Dynamic-Range Video, working with Dr Christian Richardt

Background: Computer Science

BSc in Computer Science, Cardiff University, specialising in Visual Computing. Research project on boosting saliency research through the development of a new dataset which includes multiple categorised stimuli and distortions; fixations of multiple observers on the stimuli were recorded using an eye tracker.

Download Kenneth's Research Profile


https://zubr.co/author/kenneth/


2017
Marcia Saul
Supervisor:
Dr Fred Charles, Dr Xun He
Industrial Supervisor:
Stuart Black

A Two-Person Neuroscience Approach for Social Anxiety

Can we use games technology and EEG to help us understand the role of interbrain synchrony on people experiencing the symptoms of social anxiety?

Industrial Partner:

BrainTrainUK

Research Project: A Two-Person Neuroscience Approach for Social Anxiety: Prospects into Bridging Intra- & Inter-brain Synchrony with Neurofeedback

My main field of interest is computational neuroscience, brain-computer interfaces and machine learning with the use of games in applications for rehabilitation and improving the quality of life for patients/persons in care.

Social anxiety has become one of the most prominent anxiety disorders, with many of its symptoms overlapping with other mental disorders such as depression, autism spectrum disorder, schizophrenia and ADHD. Neurofeedback (NF) is well known to modulate these symptoms using a metacognitive approach of relaying a participant's brain activity back to them for self-regulation of the target brainwave patterns. In this project, we integrate intra- and inter-brain synchrony to explore the potential of a more effective NF procedure. By using realistic multimodal feedback in the delivery of NF, we can amplify the concept of collaboration or co-operation during tasks – utilising the 'power of two' in two-person neuroscience – with the goal of synchronising brainwaves between two participants and thereby alleviating symptoms of social anxiety.

MRes - Masters project:

Using computational proprioception models and artificial neural networks in predictive two-dimensional wrist position methods.

Background: Psychology and Computational Neuroscience

BSc in Biology with Psychology, Royal Holloway University of London

MSc in Computational Neuroscience & Cognitive Robotics, University of Birmingham

Download Marcia's Research Profile

View Marcia's Research Outputs


2017
Rory Clark
Supervisor:
Dr Feng Tian

3D UIs within VR and AR with Ultrahaptics Technology

Industrial Partner:

Ultrahaptics

Research Project: 3D User Interfaces for Virtual and Augmented Reality

Research into how a 3D user interface (UI) can be presented, perceived and realised within virtual and augmented reality (VR and AR), while integrating Ultrahaptics' mid-air haptics technology. Mid-air haptics allows users to feel feedback and information directly on their hands, without having to hold a specific controller. This means the hands can be targeted for both tracking and haptics, while still allowing full freedom of control.

Background: Games Programming

BSc Games Programming, Bournemouth University, focusing on the use and development of games and game engines, graphical rendering, 3D modelling, and a number of programming languages. Final-year dissertation on a virtual reality event-planning simulation utilising the HTC Vive. Previous projects on systems ranging from the web and mobile to smart-wear devices and VR headsets.

Download Rory's Research Profile


https://rory.games


2017
Thomas Williams
Supervisor:
Dr Elies Dekoninck, Dr Simon Jones, Dr Christof Lutteroth
Industrial Supervisor:
Prof Nigel Harris, Dr Hazel Boyd

AR as a cognitive prosthesis for people living with dementia

Industrial Partner:

Designability

Research Project: Exploring the Use of Augmented Reality to Support People Living with Dementia to Complete Tasks in the Home

There have been considerable advances in the technology and range of applications of virtual and augmented reality environments. However, to date there has been limited work examining the design principles that would support successful adoption. Assistive technologies have been identified as a potential solution for the provision of elderly care; in general, such technologies have the capacity to enhance quality of life and increase the level of independence of their users.

The aim of this research project is to explore how augmented reality (AR) could be used to support those with dementia with daily living tasks and activities in the home. This will specifically focus on those living with mild to moderate dementia and their carers. Designability have been working on task sequencing for different types of daily living tasks and have amassed considerable expertise in how to prompt people with cognitive difficulties, through a range of everyday multi-step tasks. This project would allow us to explore how AR technology could build on that expertise.

The research will involve testing the design of augmented reality prompts in domestic settings. Augmented reality technologies are all still in the early stages of maturity; however, they are at the ideal stage of development to explore their application in such a unique field as assistive technology.

MSc Digital Entertainment - Masters project:

A novel gaze tracking system to improve user experience at Cultural Heritage sites, with Dr Christof Lutteroth

Background: Maths/Physics

BSc (Hons) Mathematics and Physics (four years, with placement), University of Bath

Download Thomas' Research Profile


http://blogs.bath.ac.uk/ar-for-dementia/


2017
Alexandros Rotsidis
Supervisor:
Dr Christof Lutteroth; Dr Christian Richardt
Industrial Supervisor:
Mark Lawson

Creating an intelligent animated avatar system

Industrial Partner:

Design Central (Bath) Ltd t/a DC Activ / LEGO

Research Project:

Creating an intelligent avatar: using Augmented Reality to bring 3D models to life. The project concerns the creation of a 3D, intelligent, multi-lingual avatar system that can realistically imitate (and interact with) shoppers (adults), consumers (children), staff (retail) and customers (commercial) as users or avatars, using different dialogue, appearance and actions based on initial data and on feedback about the environment and context in which it is placed, creating 'live' interactivity with other avatars and users.

While store-assistant avatars and virtual assistants are commonplace nowadays, they act in an often scripted and unrealistic manner. These avatars are also often limited in their visual representation (i.e. usually humanoid).

This project is an exciting opportunity to apply technology and visual design to many different 3D objects to bring them to life, guiding and helping people (both individually and in groups) to learn from their mistakes in a safe virtual space and to make better-quality decisions, increasing commercial impact.

Masters Project: AR in Human Robotics

Augmented Reality used in Human Robotics Interaction, working with Ken Cameron

Background: Computer Science

BSc (Hons) Computer Science from Southampton University; worked in industry for five years as a web developer. A strong interest in Computer Graphics and Machine Learning led me to the EngD programme.

Download Alex's Research Profile


http://www.alexandrosrotsidis.com/


2017
Michelle Wu
Supervisor:
Dr Zhidong Xiao, Dr Hammadi Nait-Charif

Motion Representation Learning with Graph Neural Networks

Research Interest: Motion Representation Learning with Graph Neural Networks and its Applications  

The animation of digital characters can be a long and demanding process: the human eye is very sensitive to unnatural motions, which means that animators need to pay extra attention to create realistic and believable animations. Motion capture can be a helpful tool here, as it allows movements performed by actors to be captured directly and converted into mathematical data. However, dealing with dense motion data presents its own challenges, and this usually translates into studios having difficulty reusing the large collections of motion data available, often resorting, in the end, to capturing new data instead.


To promote the recycling of motion data, time-consuming tasks (e.g. manual data cleaning and labelling) should be automated by developing efficient methods for classifying and indexing data, allowing motions to be searched for and retrieved from databases. At the core of these approaches is the learning of a discriminative motion representation. A skeleton can naturally be represented as a graph, where nodes correspond to joints and edges to bones. However, many human actions require far-apart joints to move collaboratively; to capture these internal dependencies between joints (even those without bone connections), we can leverage Graph Neural Networks to adaptively learn a model that extracts both spatial and temporal features. This will allow us to learn potentially richer motion representations that will form the basis for the tasks of motion classification, retrieval and synthesis.
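As a rough illustration of the spatial half of this idea, the sketch below runs one graph-convolution step over a toy skeleton graph, propagating per-joint features along bones via a normalised adjacency matrix. The five-joint skeleton, feature sizes and random weights are illustrative, not the project's actual architecture.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution step: H' = ReLU(A_hat @ H @ W), where
    A_hat is the symmetrically normalised adjacency with self-loops.
    H: (joints, features), A: (joints, joints), W: (features, out)."""
    A_hat = A + np.eye(A.shape[0])        # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)       # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ H @ W, 0.0)

# Toy 5-joint skeleton: a hip node connected to spine, head and arms.
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (1, 3), (1, 4)]:
    A[i, j] = A[j, i] = 1.0

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 3))     # per-joint features, e.g. xyz positions
W = rng.normal(size=(3, 8))     # weights (learned in practice, random here)
print(gcn_layer(H, A, W).shape) # -> (5, 8)
```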

Background:  Computer Animation, Computer Science

BSc Software Development for Animation, Games and Effects, Bournemouth University.

Research Assistant in Human-Computer Interaction/Computer Graphics in collaboration with the Modelling, Animation, Games, Effects (MAGE) group within the National Centre for Computer Animation (NCCA), focusing on the development and dissemination of the SHIVA Project, software that provides virtual sculpting tools for people with a wide range of disabilities.

View Michelle's Research Outputs


2017
Sameh Hussain
Supervisor:
Prof Peter Hall
Industrial Supervisor:
Andrew Vidler

Learning to render in style

Industrial Partner:

Ninja Theory

Research Project:

Investigations into the development of high-fidelity style transfer from artist-drawn examples.

Style transfer techniques have provided the means of re-envisioning images in the style of various works of art. However, these techniques can only produce credible results for a limited range of images. We are improving on existing style transfer techniques by observing and understanding how artists place their brush strokes on a canvas.

So far we have been able to build models that learn styles pertaining to line drawings from a few example strokes. We have then been able to apply the model to a variety of inputs to create stylised drawings.

Over the upcoming year, we will be working on extending this model so that we can do more than just line drawings. We will also be working with our industrial partner to develop interactive tools so that their artists can leverage the research we have produced.

MSc Digital Entertainment - Masters project: A parametric model for linear flames, working with Prof Peter Hall

Background: Mechanical Engineering

MEng in Mechanical Engineering, University of Bath; one year placement with Airbus Space and Defence developing software to monitor and assess manufacturing performance.


2017
Valentin Miu
Supervisor:
Dr Oleg Fryazinov
Industrial Supervisor:
Mark Gerhard

Realtime Scene Understanding with Machine Learning

Industrial Partner:

Playfusion Ltd

Research Project:

Realtime Scene Understanding with Machine Learning on Low-Powered Devices

Given the speed requirements of realtime applications, server-side deep learning inference is often unsuitable due to high latency, potentially even in a 5G world. With the increased computing power of smartphone processors, the leveraging of device GPUs, and the development of mobile-optimised neural networks such as MobileNet, realtime on-device inference has become possible.

Within this scope, machine learning techniques for scene understanding, such as generic object detection, are leveraged. They are implemented as multiplatform augmented reality apps, offering a unified experience by using Unity and C++ plugins, with the machine learning functionality accomplished through the TensorFlow Lite C API. In the current project, machine learning and other methods are combined to track the position and pose of a hair curler, with the purpose of developing an app to educate regular users in the use of professional hairdressing equipment.
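For context, here is a minimal sketch of TensorFlow Lite inference. The project itself calls the TensorFlow Lite C API from Unity C++ plugins; the Python binding shown here follows the same allocate/set/invoke/get cycle. The model filename and the zeroed input frame are placeholders.

```python
import numpy as np
import tensorflow as tf

# "detector.tflite" is a placeholder for a mobile-optimised model.
interpreter = tf.lite.Interpreter(model_path="detector.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# One camera frame, resized/normalised to the model's expected shape
# (zeros here stand in for real pixel data).
frame = np.zeros(inp["shape"], dtype=inp["dtype"])

interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()   # runs on-device, no server round-trip
print(interpreter.get_tensor(out["index"]).shape)
```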

Background: Physics

MSci Physics, University of Glasgow, graduating with a first-class degree. During this time I familiarised myself with compositing and 2D/3D animation in a non-professional setting. In my first year at the CDE, I successfully completed masters-level courses in Maya, OpenGL and Houdini, and have been learning CUDA GPU programming and machine learning.


Download Valentin's Research Profile

View Valentin's Research Outputs


2016
Kyle Reed
Supervisor:
Prof Darren Cosker
Industrial Supervisor:
Dr Steve Caulkin

Improving Facial Performance Animation using Non-Linear Motion

Industrial Partner:

Cubic Motion

Research Project: Improving Facial Performance Animation using Non-Linear Motion

Cubic Motion is a facial tracking and animation studio, most famous for their real-time live performance capture. The aim of this research is to improve the quality of facial motion capture and animation through the development of new methods for capture and animation.

We are investigating the utilisation of non-linear facial motion observed from 4D facial capture for improving the realism and robustness of facial performance capture and animation. As the traditional pipeline relies on linear approximations of facial dynamics, we hypothesise that using observed non-linear dynamics will automatically factor in subtle nuances such as fine wrinkles and micro-expressions, reducing the need for animators to handcraft refinements.

Starting with the development of a pipeline for 4D capture of a performer's range of motion (or Dynamic Shape Space), we apply this information to various components of the animation pipeline, from rigging and blendshape solving to performance capture and keyframe animation. We also investigate how, by acquiring the Dynamic Shape Spaces of multiple individuals, we can develop a motion manifold for the personalisation of individual expression that can be used as a prior for subject-agnostic animation. Finally, we validate the need for non-linear animation through comparison to linear methods and through audience perception studies.
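For context, the linear approximation referred to above treats a face as a neutral mesh plus a weighted sum of blendshape deltas, with weights recovered per frame by least squares. The toy sketch below (dimensions and random shapes are illustrative) recovers known weights with a non-negative least-squares solve; it is this kind of linear model that the non-linear methods aim to improve on.

```python
import numpy as np
from scipy.optimize import nnls

def solve_blendshape_weights(target, neutral, deltas):
    """Linear blendshape model: face ~= neutral + sum_i w_i * delta_i.
    Recovers non-negative weights w for one frame by least squares.
    target/neutral: (V, 3) vertex arrays; deltas: (K, V, 3)."""
    K = deltas.shape[0]
    B = deltas.reshape(K, -1).T      # (3V, K) basis matrix
    b = (target - neutral).ravel()   # (3V,) offset from neutral
    w, residual = nnls(B, b)
    return w, residual

# Toy example: 4 vertices, 2 blendshapes, frame = 0.7*shape0 + 0.2*shape1.
rng = np.random.default_rng(1)
neutral = rng.normal(size=(4, 3))
deltas = rng.normal(size=(2, 4, 3))
target = neutral + 0.7 * deltas[0] + 0.2 * deltas[1]
w, _ = solve_blendshape_weights(target, neutral, deltas)
print(np.round(w, 3))                # -> [0.7 0.2]
```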

MSc Digital Entertainment - masters project:

Using convolutional neural networks (CNNs) to predict occluded facial expressions when wearing head-mounted displays (HMDs) for VR.

Background: Computer Science

BSc (Hons) Computer Science with Industrial Placement Year, University of Bath.

Download Kyle's Research Profile

View Kyle's Research Outputs


2016
Azeem Khan
Supervisor:
Dr Tom Fincham Haines
Industrial Supervisor:
Michele Condò

Procedural gameplay flow using constraints

Industrial Partner:

Ubisoft Reflections

Research Project: Procedural gameplay flow using constraints

This project involves using machine learning to identify what players find exciting or entertaining as they progress through a level.  This will be used to procedurally generate an unlimited number of levels, tailored to a user's playing style.

Tom Clancy's The Division is one of the most successful game launches in history, and the Reflections studio was a key collaborator on the project. Reflections also delivered the Underground DLC within a very tight development window. The key to this success was the creation of a procedural level-design tool, which took a high-level script outlining key aspects of a mission template and generated multiple different underground dungeons that satisfied this gameplay template. The key difference from typical procedural environment-generation technologies is that the play environment is created to satisfy the needs of gameplay, rather than trying to fit gameplay into a procedurally generated world.

The system used for TCTD had many constraints, and our goal is to develop technology that builds on this concept to generate an unlimited number of missions and levels procedurally, in an engine-agnostic manner, for use in any number of games. We would like to investigate using Markov constraints, inspired by the 'flow machines' research currently being undertaken by Sony to generate music, text and more in a style dictated by the training material: http://www.flow-machines.com/ (other techniques may be considered).
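As a toy illustration of the Markov idea (not Reflections' tool), the sketch below learns first-order transition probabilities between hypothetical mission "rooms" from example levels, then samples new sequences. The constraint that a mission must end at an extraction point is enforced here by crude rejection sampling, whereas Markov-constraint methods would enforce it exactly during sampling. All room names and example sequences are invented.

```python
import random
from collections import Counter, defaultdict

examples = [  # hypothetical mission "sentences" from designed levels
    ["entry", "corridor", "combat", "loot", "extraction"],
    ["entry", "combat", "corridor", "combat", "extraction"],
    ["entry", "corridor", "loot", "combat", "extraction"],
]

# Learn first-order transition counts from the examples.
counts = defaultdict(Counter)
for seq in examples:
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1

def sample_mission(start="entry", end="extraction", max_len=8):
    """Random walk through the learned chain; rejection-sample until
    the walk reaches the required final room."""
    while True:
        seq = [start]
        while len(seq) < max_len and seq[-1] != end:
            options = counts[seq[-1]]
            seq.append(random.choices(list(options), options.values())[0])
        if seq[-1] == end:
            return seq

random.seed(7)
print(sample_mission())
```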

Masters Project:

An Experimental Approach to the Complexity of Solving Bimatrix Games

Background: Physics

MSci Physics with Theoretical Physics, Imperial College

Download Azeem's Research Profile


2016
Catherine Taylor
Supervisor:
Prof Darren Cosker, Dr Neill Campbell
Industrial Supervisor:
Eleanor Whitley, Robin McNicholas

Deformable objects for virtual environments

Industrial Partner:

Marshmallow Laser Feast

Research Project: Deformable objects for virtual environments 

There are currently no solutions on the market that can rapidly generate a virtual reality 'prop' from a generic object and then render it into an interactive virtual environment outside of a studio. A portable solution such as this would enable the creation of deployable immersive experiences where users could interact with virtual representations of physical objects in real time, opening up new possibilities for applications of virtual reality technologies in entertainment, but also in the sports, health and engineering sectors.

This project combines novel algorithmic software for tracking deformable objects, interactive stereoscopic graphics for virtual reality, and an innovative configuration of existing hardware to create the Marshmallow Laser Feast (MLF) DOVE system. The project objective is to create turn-key tools for repeatably developing unique immersive experiences and training environments. The DOVE system will enable MLF to create mixed-reality experiences such as live productions, serialised apps and VR products/experiences, underpinning significant business growth and new job-creation opportunities.

Background: Maths

BSc Mathematics, Edinburgh University; Dissertation on Cosmological Models


Download Catherine's Research Profile

View Catherine's Research Outputs


2016
Lewis Ball
Supervisor:
Prof Lihua You, Prof Jian Jun Zhang
Industrial Supervisor:
Dr Mark Leadbeater, Dr Chris Jenner

Material based vehicle deformation and fracturing

Industrial Partner:

Ubisoft Reflections

Research Project: Material based vehicle deformation and fracturing

Damage and deformation of vehicles in video games is essential for delivering an exciting and immersive experience to the player; however, there are tough constraints placed on deformation methods used in video games. They must produce deformations which appear plausible, so as not to break the player's immersion, yet they must also be robust enough to remain stable in any situation the player may encounter. Lastly, any deformation method must be fast enough to calculate deformations in real time while leaving enough time for other critical game-state updates such as rendering, AI and animation.

My research focuses on augmenting real-time physics simulations with data-driven methods. Data from offline, high-quality, physically based simulations are used to augment real-time simulations, allowing them to adhere to physically correct material properties while remaining fast and stable enough for production-quality video games.

Background:

BSc Physics and MSc Scientific Computing, University of Warwick. 


Download Lewis' Research Profile


2016
Padraig Boulton (Paddy)
Supervisor:
Prof Peter Hall
Industrial Supervisor:
Oliver Schilke

Recognition of Specific Objects Regardless of Depiction

Industrial Partner:

Disney Research

Research Project: Recognition of Specific Objects Regardless of Depiction

Recognition numbers among the most important of all open problems in Computer Vision. The state of the art using neural networks achieves truly remarkable performance when given real-world images (photographs). However, with one exception, the performance of every mechanism for recognition falls significantly when the computer attempts to recognise objects depicted in non-photorealistic form. This project addresses that important literature gap by developing mechanisms able to recognise specific objects regardless of the manner in which they are depicted. It builds on state-of-the-art work which is alone in generalising uniformly across many depictions.

In this case, the objects of interest are specific objects rather than visual object classes, and more particularly the objects represent visual IP as defined by the Disney corporation. Thus an object could be “Mickey Mouse”, and the task would be to detect “Mickey Mouse” photographed as a 3D model, as a human wearing a costume, as a drawing on paper, as printed on a T-shirt and so on.

Currently we are investigating how different art styles map salient information of object classes or characters, and using this to develop a recognition framework that can use examples from artistic styles to learn a domain-agnostic classifier capable of generalising to unseen depictive styles.

MSc Digital Entertainment - Masters project:  

Undoing Instagram Filters: creating a generative adversarial network (GAN) which takes a filtered Instagram photo and synthesises an approximation of the original photo.

Background: Automotive Engineering

MEng Automotive Engineering, Loughborough University.

View Paddy's Research Outputs


2015
Simone Barbieri
Supervisor:
Xiaosong Yang, Zhidong Xiao
Industrial Supervisor:
Ben Cawthorne, Thud Media

3D Content Creation Exploiting 2D Character Animation

Industrial Partner: Bait Studio

Research Project: 3D Content Creation Exploiting 2D Character Animation

While 3D animation is constantly increasing in popularity, 2D is still widely used in animation production. In fact, 2D has two main advantages. The first is economic, as it is quicker to produce, having one less dimension to consider. The second is important for artists, as 2D characters usually have highly distinctive traits which are lost in a 3D transposition. An iconic example is Mickey Mouse, whose ears appear circular in 2D no matter which way he is facing.

This research project investigates the automatic generation of 3D content from existing 2D character animations. To preserve the advantages of 2D in 3D, we propose a three-step approach: the generation of a 3D model for each perspective of each body part of the character; a registration method for each pair of models from adjacent perspectives; and the generation of the 3D animation from the 2D one.
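
The skeleton below sketches that data flow under stated assumptions: lift_to_3d and apply_pose are hypothetical placeholders for the actual methods under development, but the three steps and the shape of the data match the description above.

```python
def lift_to_3d(artwork):
    """Placeholder for step 1's per-view 3D reconstruction."""
    return {"mesh_of": artwork}

def apply_pose(registrations, frame):
    """Placeholder for step 3's pose transfer from a 2D frame."""
    return {"frame": frame, "parts": list(registrations)}

def build_part_models(part_drawings):
    """Step 1: one 3D model per drawn perspective of each body part.
    part_drawings: {part: {view_angle_degrees: 2D artwork}}."""
    return {part: {view: lift_to_3d(art) for view, art in views.items()}
            for part, views in part_drawings.items()}

def register_views(part_models):
    """Step 2: pair up models from adjacent perspectives so the
    character can turn without visual popping."""
    return {part: list(zip(sorted(views), sorted(views)[1:]))
            for part, views in part_models.items()}

def animate_3d(registrations, animation_2d):
    """Step 3: replay the 2D animation on the registered 3D models."""
    return [apply_pose(registrations, frame) for frame in animation_2d]

ears = {"ears": {0: "ears_front.png", 90: "ears_side.png"}}
models = build_part_models(ears)
clip = animate_3d(register_views(models), ["frame_01", "frame_02"])
```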

Find out more about Simone here.

Download Simone's Research Profile

View Simone's Research Outputs


http://barbierisimone.com


2015
Ifigeneia Mavridou
Ifigeneia Mavridou
Supervisor:
Dr Emili Balaguer-Ballester, Dr Ellen Seiss, Dr Alain Renaud
Industrial Supervisor:
Dr Charles Nduka

Emotion and engagement analysis of virtual reality experiences

Industrial Partner:

Emteq

Our emotions are at the core of human existence, yet there are many questions to be answered about how our emotions affect what we feel, think and do. Virtual reality (VR) represents an ideal technology for studying human behaviour, and for people to experience things that would be otherwise impossible. As the visual and audio stimuli and level of realism are under complete creative control, most aspects of the user experience can be precisely measured. Understanding and measuring the emotional responses of an individual immersed in room-scale, free-walking VR scenarios could provide the ultimate laboratory for behavioural sciences and user experience research.

For this project, we are closely collaborating with Emteq Ltd. to develop a system for emotion detection in VR using physiological signals and behavioural data. This work has assisted in the further development of a novel wearable device called “EmteqVR”, consisting of physiological sensors that can read the emotional responses of the wearer. Our emotion detection approach, which utilises machine learning, is based on the two-dimensional model comprising the dimensions of valence and arousal. The outcomes of this research will be used to provide baseline data to inform future VR research and the development of mental healthcare applications.
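
As an illustration of the two-dimensional approach (synthetic data and invented feature names; the actual EmteqVR pipeline and features are more sophisticated), one can fit separate predictors for valence and arousal and read an emotional state off the resulting quadrant:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
# Synthetic per-window features standing in for physiological signals:
# heart rate, zygomaticus EMG (smiling), corrugator EMG (frowning), EDA.
X = rng.normal(size=(200, 4))
valence = X[:, 1] - X[:, 2] + rng.normal(0, 0.1, 200)  # smile - frown
arousal = X[:, 0] + X[:, 3] + rng.normal(0, 0.1, 200)  # HR + EDA

val_model = Ridge().fit(X[:150], valence[:150])
aro_model = Ridge().fit(X[:150], arousal[:150])

def emotion_quadrant(features):
    """Place one feature vector in a quadrant of the valence-arousal plane."""
    v = val_model.predict(features[None])[0]
    a = aro_model.predict(features[None])[0]
    return (("high" if a > 0 else "low") + " arousal, "
            + ("positive" if v > 0 else "negative") + " valence")

print(emotion_quadrant(X[199]))
```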

Download Ifigeneia's Research Profile

View Ifigeneia's Research Outputs


2015
Joanna Tarko
Joanna Tarko
Supervisor:
Dr Christian Richardt
Industrial Supervisor:
Tim Jarvis

Graphics Insertions into Real Video for Market Research

Industrial Partner

CheckmateVR

Research Project: Graphics Insertions into Real Video for Market Research

Often, when asked, people can't explain why they bought a specific product. The aim of market research is to design research methods that help to explain why the decision was made. In a perfect scenario, study participants would be placed in a real, but fully-controlled shopping environment; however, in practice, such environments are very expensive or even impossible to build. Virtual reality (VR) environments, in turn, are fully controllable and immersive, but they lack realism.

My project combines video camera footage with computer-generated elements to create sufficiently realistic (or plausible) but still controlled environments for market research. The computer graphics elements can range from texture replacement (as on the screen of a mobile phone) through to complete three-dimensional models of buildings (such as a petrol station); more commonly, billboards, posters and individual product items comprise the graphics models. After working with standard cameras, I focused on 360° cameras (cameras that capture everything around them in every direction), which are rapidly gaining in popularity and may provide a good replacement for VR content in terms of immersion.
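
The simplest insertion mentioned above, replacing a flat region such as a poster or phone screen, can be sketched as a homography warp. This is a generic OpenCV illustration, not CheckmateVR's pipeline, and it assumes the four corner points have already been tracked:

```python
import cv2
import numpy as np

def insert_billboard(frame, poster, corners):
    """Warp `poster` onto the quad given by `corners` ((4, 2) float32,
    ordered top-left, top-right, bottom-right, bottom-left) and
    composite it over `frame`."""
    h, w = poster.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H, _ = cv2.findHomography(src, corners)
    size = (frame.shape[1], frame.shape[0])
    warped = cv2.warpPerspective(poster, H, size)
    # Warp a solid mask the same way, so we know which pixels to replace.
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H, size)
    out = frame.copy()
    out[mask > 0] = warped[mask > 0]
    return out
```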

MSc Digital Entertainment - Masters project:

Matchmoving on set using real-time visual-inertial localization and depth estimation, working with Dr Neill Campbell

Background: Applied Physics/ Computer Graphics

Download Joanna's Research Profile

View Joanna's Research Outputs


2015
Lazaros Michailidis
Lazaros Michailidis
Supervisor:
Dr Emili Balaguer-Ballester
Industrial Supervisor:
Jesus Lucas Barcias

Neurogaming

Industrial Partner:

Sony Interactive Entertainment Europe

Research Project: Uncovering the physiological correlates of flow experience in virtual reality games

The purpose of this study is to investigate the physiological underpinnings of flow during virtual reality game playing. Flow is considered a highly desirable mental state, with links to creativity, increased performance and well-being. It constitutes the core experience of task engagement and is particularly relevant in video games.

Detecting and predicting flow in either real-time or offline settings can facilitate our understanding of design and usability parameters that will allow for an engaging and enjoyable experience in digital media use. By extension, adaptation of the video game at hand, based on the user's physiology, can help create compelling experiences that will maintain the user's concentration, motivation and replay intention. These are factors highly valued by game designers, as they can identify areas wherein the game has failed to stimulate an immersive experience.

Our research commenced with heart rate variability and electrooculography and extended to electroencephalography, all of which have been employed in a custom game tailored for virtual reality. Through this project, we aim to help experts create better and more engaging digital games that will benefit both sides: the player base and the industry.
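
As a small concrete example of the first of those signals (a standard textbook feature, not necessarily the one used in this study), heart rate variability is often summarised by RMSSD over the R-R intervals between heartbeats:

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between R-R intervals,
    a common time-domain heart-rate-variability feature."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

print(rmssd([812, 830, 795, 805, 840]))  # milliseconds between beats
```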

Download Lazaros' Research Profile

View Lazaros' Research Outputs


2015
Thomas Joseph Matthews
Thomas Joseph Matthews
Supervisor:
Dr Feng Tian / Prof Wen Tang
Industrial Supervisor:
Tom Dolby

Semi-Automated Proficiency Analysis and Feedback for VR Training

Industrial Partner:

AiSolve

Research Project: Semi-Automated Proficiency Analysis and Feedback for VR Training

Virtual Reality (VR) is a growing and powerful medium that is finding traction in a wide variety of fields. My research aims to encourage immersive learning and knowledge retention through short-form VR training scenarios.

Our project is streamlining the process of proficiency analysis in virtual reality training by using performance recording and data analytics to directly compare subject matter experts and trainees. Currently, virtual reality training curricula require at least post-performance review and/or direct supervised interpretation to provide feedback, whereas our system will be able to use expert performance models to direct feedback towards trainees' strengths and weaknesses, both in a specific scenario and across the subject curriculum.

Using an existing virtual reality training scenario developed by AiSolve and Children's Hospital Los Angeles, subject matter experts will complete multiple performance variations in a single scenario. This provides a scenario action graph, which is then used as a baseline against which to measure trainee performances, noting significant variations in attributes such as decision-making, stimuli perception and stress management. We will validate the system using objective and subjective accuracy metrics, implementation feasibility and usability measures.
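
One simple way to compare a trainee's path through such an action graph against an expert baseline (an illustrative stand-in for the richer per-attribute analysis described above) is an edit distance over action sequences:

```python
def edit_distance(a, b):
    """Number of insertions/deletions/substitutions to turn a into b
    (Levenshtein distance, computed with a rolling DP row)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # delete x
                           cur[j - 1] + 1,         # insert y
                           prev[j - 1] + (x != y)))  # substitute
        prev = cur
    return prev[-1]

# Hypothetical action sequences from a medical training scenario.
expert = ["assess_airway", "call_team", "give_oxygen", "reassess"]
trainee = ["call_team", "give_oxygen", "reassess"]
print(edit_distance(expert, trainee))  # 1: missed the airway assessment
```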

More information on the VRSims framework this project is attached to can be found on the AiSolve website: http://www.aisolve.com/enterprise/

Download Thomas' Research Profile

View Thomas' Research Outputs


http://www.aisolve.com/


2015
Javier Dehesa
Javier Dehesa
Supervisor:
Julian Padget
Industrial Supervisor:
Andrew Vidler

Modelling human–character interaction in virtual reality

Industrial Partner

Ninja Theory

Research Project: Modelling human–character interaction in virtual reality

Interaction design in virtual reality is difficult because of the typical nature of the input (tracked head and hand positions) and the freedom of action of the user. In the case of interaction with virtual (human-like) characters, generating plausible reactions under every circumstance generally requires intensive animation work and complex hand-crafted logic, sometimes also imposing limitations on the world design. We address the problem of real-time human–character interaction in virtual reality by proposing a general framework that interprets the intentions of the user in order to guide the animation of the character.

Using a vocabulary of gestures, the framework analyses the 3D head and hand input provided by the tracking hardware and decides which actions the character should take; these are then used to synthesise an appropriate animation for the scene. We propose a novel combination of data-driven models that perform the tasks of gesture recognition and character animation, guided by simple logic describing the interaction scenarios.
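
Schematically (all names hypothetical; the real recogniser and animation synthesiser are data-driven models rather than the threshold rule below), the per-frame loop looks like this:

```python
from dataclasses import dataclass

@dataclass
class TrackedInput:
    head: tuple          # (x, y, z) in metres
    left_hand: tuple
    right_hand: tuple

def recognise_gesture(window):
    """Stand-in for the data-driven gesture recogniser: a short window
    of TrackedInput frames in, a gesture label out."""
    rise = window[-1].right_hand[1] - window[0].right_hand[1]
    return "raise_sword" if rise > 0.3 else "idle"

INTERACTION_LOGIC = {            # simple hand-authored scenario rules
    "raise_sword": "parry",
    "idle": "hold_guard",
}

def character_action(window):
    """Gesture -> character action; the action is then handed to the
    animation synthesiser (not shown)."""
    return INTERACTION_LOGIC[recognise_gesture(window)]

frames = [TrackedInput((0, 1.7, 0), (-0.3, 1.0, 0), (0.3, 1.0, 0)),
          TrackedInput((0, 1.7, 0), (-0.3, 1.0, 0), (0.3, 1.5, 0))]
print(character_action(frames))  # -> "parry"
```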

We consider the problem of sword fighting in virtual reality as our case study and identify other potential applications and extensions. We expect our research to establish an academically and industrially valuable interaction framework, while also providing novel insights in real-time applications of machine learning to virtual reality environments.

Background: Mathematics/ Computer Science

Master's degree in Mathematics and Computer Science, Universidad de Cantabria

Download Javier's Research Profile

View Javier's Research Outputs


2013
Adam Boulton
Adam Boulton
Supervisor:
Dr Rachid Hourizi, Prof Eamonn O'Neill
Industrial Supervisor:
Alice Guy

The Interruption and Abandonment of Video Games

Industrial Partner:

PaperSeven

Research Project

The cost of video game development is rapidly increasing as the technological demands of producing high quality games grow ever larger. With budgets set to spiral into the hundreds of millions of dollars, and audience sizes rapidly expanding as gaming reaches new platforms, we investigate the phenomenon of task abandonment in games. Even the most critically acclaimed titles are rarely completed by even half their audience. With the cost of development so high, it is more important than ever that developers, as well as the players, get value for money. We ask why so few people are finishing their games, and investigate whether anything can be done to improve these numbers.

Background: Computer Science

BSc Computer Science, Cardiff University


2013
Tristan Smith
Tristan Smith
Supervisor:
Dr Julian Padget
Industrial Supervisor:
Andrew Vidler

Procedural content generation for computer games

Industrial Partner

Ninja Theory

Research Project

Procedural content generation for computer games

Procedural content generation (PCG) is increasingly used in games to produce varied and interesting content. However, PCG systems are becoming increasingly complex and tailored to specific game environments, making them difficult to reuse, so we investigate ways to make PCG code reusable and to allow simpler, usable descriptions of the desired output. By allowing the behaviour of the generator to be specified without altering the code, we provide increasingly data-driven, modular generation. We look at reusing tools and techniques originally developed for the semantic web, and investigate the possibility of using them with industry-standard games development tools.
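
A toy sketch of that separation (names and spec format invented for illustration): the generator below never changes, while designers control the output purely by editing the declarative spec:

```python
import random

# Declarative spec: the data a designer edits instead of the code.
SPEC = {
    "room_count": (4, 8),            # (min, max)
    "room_size": (3, 10),
    "decorations": ["crate", "barrel", "torch"],
}

def generate_level(spec, seed=None):
    """Reusable generator: behaviour is driven entirely by `spec`."""
    rng = random.Random(seed)
    rooms = []
    for _ in range(rng.randint(*spec["room_count"])):
        rooms.append({
            "size": rng.randint(*spec["room_size"]),
            "decoration": rng.choice(spec["decorations"]),
        })
    return rooms

print(generate_level(SPEC, seed=42))
```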

Background: Computer Science

Master of Engineering (MEng), Computer Science with Artificial Intelligence, University of Southampton

View Tristan's Research Outputs


2013
Zack Lyons
Zack Lyons
Supervisor:
Dr Leon Watts
Industrial Supervisor:
Prof Nigel Harris

Virtual Therapy for Acquired Brain Injury Rehabilitation

Industrial Partner:

Designability / Brain Injury Rehabilitation Trust

Research Project:

Virtual Therapy - A Story-Driven and Interactive Virtual Environment for Acquired Brain Injury Rehabilitation

An estimated 350,000 people are affected in the UK each year by an acquired brain injury (ABI). When such injuries affect frontal lobe areas, a person can start to exhibit challenging behaviours that preclude community integration. Even seemingly basic everyday tasks, such as buying a bus ticket or searching for a shop, can be profoundly difficult and highlight significant behavioural obstacles to overcome. A crucial concern for clinicians is therefore to assess such obstacles by witnessing how well people with an ABI manage apparently routine tasks.

Our research has generated an immersive and interactive virtual environment that places people with ABIs into a realistic community setting. The environment challenges their ability to organise tasks, think creatively about solutions, and seek answers through social interactions. By delivering tasks that mirror the demands of the real world, clinicians may be able to better predict how people will behave in the community and train them to overcome these difficulties.

Download Zack's Research Profile

View Zack's Research Outputs


2013
Rahul Dey
Rahul Dey
Supervisor:
Dr Christos Gatzidis
Industrial Supervisor:
Jason Doig

New Games Technologies

My research focuses on real-time voxelization algorithms and on procedurally creating content in voxel spaces. Creating content using voxels is more intuitive than polygon modelling and possesses a number of other advantages. This research intends to provide novel methods for real-time voxelization, and for subsequently editing the resulting voxel data using procedural generation techniques. These methods will also be adapted for next-generation consoles, taking advantage of the features they expose.
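
As a minimal illustration of the underlying representation (not the project's real-time algorithm), the sketch below bins points into an occupancy grid; once geometry lives in such a grid, procedural edits reduce to simple array operations:

```python
import numpy as np

def voxelize_points(points, origin, voxel_size, grid_shape):
    """Mark every voxel that contains at least one input point."""
    grid = np.zeros(grid_shape, dtype=bool)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < grid_shape), axis=1)
    grid[tuple(idx[inside].T)] = True
    return grid

pts = np.random.rand(1000, 3)          # stand-in for sampled geometry
vox = voxelize_points(pts, origin=np.zeros(3), voxel_size=0.1,
                      grid_shape=(10, 10, 10))
print(vox.sum(), "occupied voxels")
```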

View Rahul's Research Outputs



