Current research engineers and live projects

We currently have nearly 50 live projects with 40 companies.

Our research engineers are involved in everything from procedural content generation for international games companies and assistive technologies for stroke rehabilitation, to the future of interactive technologies for major broadcasters and virtual reality for naval training.

Get to know some of our current research engineers below, and see our video, filmed during our CDE Winter Networking Event at the British Film Institute, London, to hear more from our students and alumni.

Research Engineers looking for placements:

Victor Ceballos Inza - geometry processing and deformable objects

Kat Wojna - human-robot task sharing

Olivia Ruston - designing wearable technologies

A new cohort of students will join the CDE Engineering Doctorate (EngD) in Digital Entertainment in September 2019. If you're a company or organisation interested in collaborating with us on research in digital entertainment and creative technologies through the EngD programme, get in touch with our project co-ordinators.
 


2018
Sydney Day
Sydney Day
Supervisor:
Lihua You
Industrial Supervisor:
Jason Fairley

Humanoid Character Creation Through Retargeting

Industrial Partner:

Axis Animation

Research Project: Humanoid Character Creation Through Retargeting

This project furthers research into the automatic creation of rigs for humanoid characters, with associated animation cycles and poses. Retargeting covers a number of techniques: automatic generation of facial blendshapes from a central reference library, retargeting of bipedal humanoid skeletons, and transfer of weights between characters of differing topologies. The key goal is to dramatically reduce the time needed to rig certain types of character, freeing riggers to work on more complex rigs that cannot be automated.

Background: Computer Science

BA (Hons) Computer Animation and Visualisation

Download Sydney's Research Profile


2018
Jack Brett
Jack Brett
Supervisor:
Dr Christos Gatzidis
Industrial Supervisor:
Dr Ning Xu

Augmented Music Interaction and Gamification

Industrial Partner:

Roli

Research Project: Augmented Music Interaction and Gamification

I am currently exploring barriers to entry in creating music, as well as assessing how one learns an instrument at beginner level.

Background: Games Technology

BSc (Hons) Games Technology. My previous research work was conducted mostly with the Psychology Department, where I created programs for mobile/PC use and later branched into virtual reality. Most recently, I have been focusing on a VR program used in clinical trials to gauge the severity of certain mental illnesses such as dementia.


http://jackbrett.co/


2018
Neerav Nagda
Neerav Nagda
Supervisor:
Dr Xiaosong Yang, Dr Jian Chang, Dr Richard Southern
Industrial Supervisor:
James Coore

Asset Retrieval System

Industrial Partner:

Absolute Post

Research Project: Asset Retrieval System

Media assets are often recreated over the course of multiple projects, resulting in duplicated work. Sometimes data from past projects are retrieved to reduce duplication, but selecting the relevant data is a challenge, as file names can be an ambiguous representation of the content being stored.

This project aims to develop a semantic understanding of the contents of media files, by adding tags and metadata to media data, making it easier to search for assets from previous projects for reuse or re-evaluation.

Background: Computer Science

BA Computer Visualisation and Animation. I specialised in programming and scripting, developing tools and plugins for content creation applications. My major project in my final year sparked my research interest in machine learning and neural networks for motion synthesis.


2018
Aaron Demolder
Aaron Demolder
Supervisor:
Dr Hammadi Nait-Charif

Data capture and 3D integration for VFX and Emerging Technology

Research Interest: Data capture and 3D integration for VFX and Emerging Technology

I'm interested in better incorporating emerging technology into art-driven pipelines, including everything from mobile sensors to lidar units.
Having learnt the skills of a 3D generalist and explored almost the entirety of the 3D pipeline, I want to improve and expand artists' experiences with content creation, be it in traditional VFX or by pushing the possibilities of real-time capture, performance, or display.

I'm currently exploring holographic/volumetric capture.

Background: Computer Science

BA (Hons) Computer Animation and Visualisation


https://aarondemolder.com


2018
Katarzyna Wojna
Katarzyna Wojna

Research Interest: Human Robot Collaboration

Are you interested in collaborating with academic researchers working on task sharing and collaborative work between humans and robots? Please contact Sarah Parry, Research Project Co-ordinator, s.c.parry@bath.ac.uk

Research Interest: Task sharing between humans and robots

I am interested in exploring the use of tactile (haptic) output to support more effective human-robot communication and collaboration. I am currently working with Dr Michael Wright on a project on understanding the role of haptic feedback in collaboration between humans and robots.

Background: Computer Science, 3D/visual computing and HCI

Portfolio: https://www.artstation.com/artist/katty


2018
Graham Rigler
Graham Rigler
Supervisor:
Wen Tang; Dr Richard Southern
Industrial Supervisor:
Griffon Hoverwork

Griffon Hovercraft Simulator for Pilot Training

Industrial Partner

Griffon Hoverwork

Research Project: Griffon Hovercraft Simulator for Pilot Training

Background: Computer Science

BSc (Hons) Games Programming


2018
Olivia Ruston
Olivia Ruston

Designing interactive wearable technologies

Are you interested in sponsoring and supporting research on designing interactive wearable technologies? Please contact Sarah Parry, Research Project Co-ordinator, s.c.parry@bath.ac.uk

Research Interest: Designing Interactive wearable technologies

My research interests are around interactive wearable technologies, a branch of computing that lies at the crossroads between computer science, fashion, sport, theatre and more. Garments embedded with sensors/effectors collect information and inform the wearer about their environment; individual garments may work in isolation or in conjunction with other similar garments connected via existing network infrastructures. There is great potential for wearable technology to become an integral part of everyday life, and Human Computer Interaction offers a multidisciplinary approach to realising this potential. Research in this area has purpose beyond academia and can be applied within industry.

Wearable technologies have the potential to integrate information that has meaning for people as they go about their everyday lives. They can reflect social and physical aspects of the environment within which the user must act. Research into wearable technologies has existed since the advent of modern computing, yet its numerous applications are not yet widely available commercially.

Current status: EngD programme - Year 1, Human Computer Interaction route

Modules include Mobile & Pervasive Systems, Collaborative Systems and Interactive Communication Design.

Background: Computer Science

BSc Computer Science, University of Bath

Specialised in Human Computer Interaction; modules included Safety Critical Computer Systems and Designing Interactive Computer Systems. My final year project entitled “An Investigation of User Interactions with Wearable Ambient Awareness Technologies” was graded at a First with 85%.

2016 BCS Lovelace Colloquium Winner

2015 & 2017 BCS Lovelace Colloquium Finalist


2018
Farbod Shakouri
Farbod Shakouri

Connected Tangible Objects for Immersive Augmented Reality

Industrial Partner:

PlayFusion

I’m investigating methods of interaction with tangible objects in immersive Augmented Reality (AR) narratives. AR has become a prevalent technology in the games industry: a medium that blends interactive virtual information with our physical environment. Various systems have explored methods for interacting with AR and narratives. However, little research has been carried out to tackle the challenges of enabling the Internet of Tangible Things (IoTT) (Angelini, et al. 2017) to be aware of real-time virtual entities in the context of immersive narratives.

Background: Games Technology

BSc (Hons) Games Technology, Bournemouth University - with one year placement at my startup company.

Research Assistant for Corpus Quod:

I explored Augmented Reality (AR) and immersive technologies for interactive narratives, developing new approaches for immersive performance experience and ways of documenting and assessing its consumption, by designing and implementing an AR immersive experience prototype intended to capture the attitudes of audience-participants towards refugees and asylum seekers (RAS) and the UK-based asylum process.

Research Project: Avebury Portal.

My research project focused on using a location-based Augmented Reality application to enhance users’ experience at an archaeological heritage site, creating a mystery treasure hunt that enables users to use the environment for clues in order to solve the underlying puzzles.

ResearchGate


2018
Robert Kosk
Robert Kosk
Supervisor:
Dr Richard Southern
Industrial Supervisor:
Willem Kokke

Biomechanical Parametric Faces Modelling and Animation

Industrial Partner:

Humain

Research Project: Biomechanical parametric faces modelling and animation

Project Overview

Modelling and animation of high-quality, digital faces remains a tedious and challenging process. Although sophisticated data-capture and manual processing allow realistic results in offline production, there is demand in the rapidly developing virtual reality industry for fully automated and flexible methods.

My project aims to develop a parametric template for physically based facial modelling and animation, which will:

  • automatically generate any face, either existing or synthetic,
  • intuitively edit the structure of a face without affecting the quality of animation,
  • reflect the non-linear nature of facial movement,
  • retarget facial performance, accounting for the anatomy of particular faces.

The ability to generate faces with governing, meaningful parameters such as age, gender or ethnicity is a crucial objective for wider adoption of the system among artists. Furthermore, the template can be extended with numerous novel applications, such as animation retargeting driven by muscle activations, fantasy character synthesis or digital forensic reconstruction.

Background: Computer Science

BA (Hons) Computer Visualisation and Animation


www.robertkosk.com


2018
Karolina Pakenaite
Karolina Pakenaite

Research Interest: Artificial Intelligence in Creative Industry

Background: Maths/ Computer Science

MSci Mathematics with a Year in Computer Science, Birmingham


2017
Kenneth Cynric Dasalla
Kenneth Cynric Dasalla
Supervisor:
Dr Christian Richardt, Dr Christof Lutteroth
Industrial Supervisor:
Jack Norris, Chris Price

Mixed Reality Broadcast Solutions

Industrial Partner:

ZubrVR

Research Project: Using realtime depth-sensing cameras and positional tracking technologies in video for Mixed Reality Broadcast Solutions

The project aims to investigate the use of depth-sensing cameras and positional tracking technologies to dynamically composite different visual content in real time for mixed-reality broadcasting applications. This could involve replacing green-screen backgrounds with dynamic virtual environments, or augmenting 3D models into a real-world video scene.

A key goal of the project is to keep production costs as low as possible. The technical research will therefore be undertaken predominantly with off-the-shelf consumer hardware to ensure accessibility. At the same time, the developed techniques also need to be integrated with existing media production techniques, equipment and approaches, including user interfaces, studio environments and content creation.
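The depth-keyed compositing idea can be illustrated with a minimal sketch (toy data and a hypothetical threshold; a production system would handle soft mattes, sensor noise and calibration):

```python
import numpy as np

def depth_composite(foreground, background, depth, threshold=1.5):
    """Keep real pixels closer than `threshold` metres; elsewhere show the
    virtual background, replacing the green screen."""
    mask = depth < threshold            # True where the real subject is near the camera
    return np.where(mask[:, :, None], foreground, background)

# Toy 2x2 frame: the left column is near (kept), the right column far (replaced).
fg = np.full((2, 2, 3), 200, dtype=np.uint8)     # camera image
bg = np.zeros((2, 2, 3), dtype=np.uint8)         # rendered virtual environment
depth = np.array([[1.0, 3.0], [1.2, 4.0]])       # metres, per pixel
frame = depth_composite(fg, bg, depth)
```

Because the mask comes from measured depth rather than chroma, the same compositing works against arbitrary real backgrounds, which is what makes low-cost consumer depth sensors attractive here.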

Follow the project Blog

MSc Digital Entertainment - Masters Project:

Multi-View High-Dynamic-Range Video, working with Dr Christian Richardt

Background: Computer Science

BSc in Computer Science, Cardiff University, specialising in Visual Computing. Research project on boosting saliency research through the development of a new dataset including multiple categorised stimuli and distortions; fixations of multiple observers on the stimuli were recorded using an eye tracker.

Download Kenneth's Research Profile


https://zubr.co/author/kenneth/


2017
Valentin Miu
Valentin Miu
Supervisor:
Dr Oleg Fryazinov
Industrial Supervisor:
Mark Gerhard

Realtime Scene Understanding with Machine Learning

Industrial Partner:

Playfusion Ltd

Research Project:

Realtime Scene Understanding with Machine Learning on Low-Powered Devices

Given the speed requirements of realtime applications, server-side deep learning inference is often not suitable due to high latency, potentially even in a 5G world. With the increased computing power of smartphone processors, the leveraging of device GPUs, and the development of mobile-optimized neural networks such as Mobilenet, realtime on-device inferencing has become possible.

Within this scope, machine learning techniques for scene understanding, such as generic object detection, are leveraged. They are implemented as multiplatform augmented reality apps offering a unified experience, built with Unity and C++ plugins, with the machine learning functionality provided through the TensorFlow Lite C API. In the current project, machine learning and other methods are combined to track the position and pose of a hair curler, with the purpose of developing an app that teaches consumers to use professional hairdressing equipment.
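Mobile-optimised networks such as Mobilenet make on-device inference feasible largely through depthwise separable convolutions. A back-of-the-envelope multiplication count (illustrative layer sizes, not figures from this project) shows why:

```python
def standard_conv_mults(h, w, c_in, c_out, k):
    # Each output pixel of each output channel applies a k*k*c_in filter.
    return h * w * c_out * k * k * c_in

def separable_conv_mults(h, w, c_in, c_out, k):
    depthwise = h * w * c_in * k * k    # one k*k spatial filter per input channel
    pointwise = h * w * c_in * c_out    # 1x1 convolution mixes channels
    return depthwise + pointwise

# A 112x112 feature map, 32 -> 64 channels, 3x3 kernels:
std = standard_conv_mults(112, 112, 32, 64, 3)
sep = separable_conv_mults(112, 112, 32, 64, 3)
ratio = std / sep    # roughly 8x fewer multiplications
```

This order-of-magnitude saving per layer, compounded across a whole network, is what brings realtime inference within reach of a smartphone GPU.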

Background: Physics

MSci Physics, University of Glasgow, graduating with a first-class degree. During this time I familiarised myself with compositing and 2D/3D animation in a non-professional setting. In my first year at the CDE, I successfully completed masters-level courses in Maya, OpenGL and Houdini, and have been learning CUDA GPU programming and machine learning.

Download Valentin's Research Profile


http://miu-v.com/


2017
Michelle Wu
Michelle Wu
Supervisor:
Dr Zhidong Xiao

Research Interest: Motion synthesis with Neural Networks

I am interested in motion and performance capture and how machine learning algorithms can be applied to motion data for application in the VFX and games industries. My current research project is focused on the design of a framework for character animation synthesis from content-based motion retrieval. The project's aim is to reuse collections of human motion data, exploiting unsupervised learning to train an effective motion retrieval method. It will provide animators with more control over the generation of high-quality animations, using neural networks for motion synthesis purposes.

Background:  Computer Animation, Games and Effects

BSc Software Development for Animation, Games and Effects, Bournemouth University.

Research Assistant in Human Computer Interaction/Computer Graphics in collaboration with the Modelling Animation Games, Effects (MAGE) group within the National Centre for Computer Animation (NCCA), focusing on the development and dissemination of the SHIVA Project, software that provides virtual sculpting tools for people with a wide range of disabilities.


2017
Marcia Saul
Marcia Saul
Supervisor:
Dr Fred Charles, Dr Xun He
Industrial Supervisor:
Stuart Black

Industrial Partner:

BrainTrainUK

Research Project: A Two-Person Neuroscience Approach for Social Anxiety: Prospects into Bridging Intra- & Inter-brain Synchrony with Neurofeedback

My main field of interest is computational neuroscience, brain-computer interfaces and machine learning with the use of games in applications for rehabilitation and improving the quality of life for patients/persons in care.

Social anxiety has become one of the most prominent anxiety disorders, with many of its symptoms overlapping with other mental disorders such as depression, autism spectrum disorder, schizophrenia and ADHD. Neurofeedback (NF) is well known to modulate these symptoms using a metacognitive approach of relaying a participant’s brain activity back to them for self-regulation of the target brainwave patterns. In this project, we explore the potential of integrating intra- and inter-brain synchrony for a more effective NF procedure. By using realistic multimodal feedback in the delivery of NF, we can amplify collaboration or co-operation during tasks – utilising the ‘power of two’ in two-person neuroscience – to synchronise brainwaves between two participants, aiming to alleviate symptoms of social anxiety.

MRes - Masters project:

Using computational proprioception models and artificial neural networks in predictive two-dimensional wrist position methods.

Background: Psychology and Computational Neuroscience

BSc in Biology with Psychology, Royal Holloway University of London

MSc in Computational Neuroscience & Cognitive Robotics, University of Birmingham

Download Marcia's Research Profile


2017
Rory Clark
Rory Clark
Supervisor:
Dr Feng Tian

3D UIs within VR and AR with Ultrahaptics Technology

Industrial Partner:

Ultrahaptics

Research Project: 3D User Interfaces for Virtual and Augmented Reality

Research into how a 3D user interface (UI) can be presented, perceived, and realised within virtual and augmented reality (VR and AR), while integrating Ultrahaptics mid-air haptics technology. Mid-air haptics allows users to feel feedback and information directly on their hands, without having to hold a specific controller. This means the hands can be targeted for both tracking and haptics, while still allowing full freedom of control.

Background: Games Programming

BSc Games Programming, Bournemouth University, focusing on the use and development of games and game engines, graphical rendering, 3D modelling, and a number of programming languages. Final year dissertation on a virtual reality event planning simulation utilising the HTC Vive. Previous projects on systems ranging from the web and mobile to smart-wear devices and VR headsets.

Download Rory's Research Profile


https://rory.games


2017
Sameh Hussain
Sameh Hussain
Supervisor:
Prof Peter Hall
Industrial Supervisor:
Andrew Vidler

Learning to render in style

Industrial Partner:

Ninja Theory

Research Project:

Procedural generation: Investigations into real-time applications of style transfer incorporating inference of contextual details to produce stylistic and/or artistic post-processing effects.

Style transfer techniques have provided the means of re-envisioning images in the style of various works of art. However, these techniques can only produce credible results for a limited range of images. As there is no consideration of the contextual details within the image, current style transfer techniques do not produce temporally coherent results. For example, the application of a painterly effect requires consideration of the objects within an image; the effect needs to be applied in such a way that these objects remain recognisable. We are researching the development of style transfer techniques that take contextual details into account, with the ultimate aim of creating post-processing effects that can be used in the digital entertainment industry.

MSc Digital Entertainment - Masters project: 

A parametric model for linear flames, working with Prof Peter Hall

Background: Mechanical Engineering

MEng in Mechanical Engineering, University of Bath; one year placement with Airbus Space and Defence developing software to monitor and assess manufacturing performance.


2017
Thomas Williams
Thomas Williams
Supervisor:
Dr Elies Dekoninck, Dr Simon Jones, Dr Christof Lutteroth
Industrial Supervisor:
Prof Nigel Harris, Dr Hazel Boyd

AR as a cognitive prosthesis for people living with dementia

Industrial Partner:

Designability

Research Project: AR as a cognitive prosthesis for people living with dementia

There have been considerable advances in the technology and range of applications of virtual and augmented reality environments. However, to date, there has been limited work examining design principles that would support successful adoption (Gandy 2017). Assistive technologies have been identified as a potential solution for the provision of elderly care. Such technologies generally have the capacity to enhance quality of life and increase the level of independence of their users.

The aim of this research project is to explore how augmented reality (AR) could be used to support those with dementia with daily living tasks and activities. This will specifically focus on those living with mild to moderate dementia and their carers. Designability have been working on task sequencing for different types of daily living tasks and have amassed considerable expertise in how to prompt people with cognitive difficulties, through a range of everyday multi-step tasks (Boyd 2015). This project would allow us to explore how AR technology could build on that expertise.

The research will involve developing new applications for use with augmented reality technology such as the Microsoft HoloLens, Samsung AR or Meta 2. These augmented reality technologies are all still in the early stages of maturity; however, they are at the ideal stage of development to explore their application in such a unique field as assistive technology.

MSc Digital Entertainment - Masters project:

A novel gaze tracking system to improve user experience at Cultural Heritage sites, with Dr Christof Lutteroth

Background: Maths/Physics

BSc (Hons) Mathematics and Physics, University of Bath (four years, with placement)

Download Thomas' Research Profile


http://blogs.bath.ac.uk/ar-for-dementia/


2017
Victor Ceballos Inza
Victor Ceballos Inza

Geometry processing with deformable objects

Are you a company with interesting problems or challenges around geometry processing and deformable objects? Contact Sarah Parry, Research Project Co-ordinator, s.c.parry@bath.ac.uk

Research Interests: Geometry processing and deformable objects

My research areas of interest are geometry processing and modelling. I am particularly interested in deformable objects, and how traditional mesh processing methods can be improved with the help of novel learning techniques. Other interests include computer animation and visual effects.

Masters Project: Differentiable Procedural Modelling

Procedural modelling has the potential to generate a large variety of detailed content, with applications in film, games, and simulations. However, artists can find procedural modelling systems unintuitive, as these applications rely on the technical skills of the user. Differentiable Procedural Modelling attempts to bridge this gap by allowing more direct and interactive manipulation of the models.

It is based on efficient differentiation and optimisation. We show that such a system can be built to run efficiently in real time. This project builds on previous work carried out at UCL. We seek to improve the existing application in terms of efficiency, as well as to add new functionality, including the support of novel procedural rules and high-order differentiation. Working with Dr Yongliang Yang.

Background: Computer Graphics, ML and Maths

BSc in AI & Maths, University of Edinburgh; MSc in Computer Graphics, Vision & Imaging, UCL; Universitat Politècnica de Catalunya, Barcelona, developing a low-cost system for setting up fitted self-avatars in VR.

Research Assistant, Toshiba Healthcare, Edinburgh, working on the application of computer vision techniques to healthcare, for example the detection of falls in the elderly; and at Universitat Politècnica de Catalunya, Barcelona, researching the analysis of colonic content for diagnosis.


2017
Alexandros Rotsidis
Alexandros Rotsidis
Supervisor:
Prof Peter Hall; Dr Christof Lutteroth
Industrial Supervisor:
Mark Lawson

Creating an intelligent animated avatar system

Industrial Partner:

Design Central (Bath) Ltd t/a DC Activ / LEGO

Research Project:

Creating an intelligent avatar: using Augmented Reality to bring 3D models to life. The project involves creating a 3D intelligent multi-lingual avatar system that can realistically imitate (and interact with) shoppers (adults), consumers (children), staff (retail) and customers (commercial) as users or avatars, using different dialogue, appearance and actions based on initial data and feedback about the environment and context in which it is placed, creating ‘live’ interactivity with other avatars and users.

While store assistant avatars and virtual assistants are commonplace today, they often act in a scripted and unrealistic manner. These avatars are also often limited in their visual representation (i.e. usually humanoid).

This project is an exciting opportunity to apply technology and visual design to many different 3D objects to bring them to life, guiding and helping people (both individually and in groups) to learn from their mistakes in a safe virtual space and make better-quality decisions, increasing commercial impact.

Masters Project: AR in Human Robotics

Augmented Reality used in Human Robotics Interaction, working with Ken Cameron

Background: Computer Science

BSc (Hons) Computer Science from the University of Southampton; worked in industry for five years as a web developer. A strong interest in Computer Graphics and Machine Learning led me to the EngD programme.

Download Alex's Research Profile


http://www.alexandrosrotsidis.com/


2016
John Raymond Hill
John Raymond Hill
Supervisor:
Dr Vedad Hulusic
Industrial Supervisor:
Holovis

Holovis Flight Deck Officer VR Simulation System

Industrial Partner:

Holovis

Research Project: Holovis Flight Deck Officer VR Simulation System

I've always been excited by technologies which let us exceed our biological limitations, and Virtual Reality offers endless possibilities to achieve this. My research interests are in bringing down the barriers to communication between our senses and virtual environments, to increase what we're able to experience and accomplish in them.

Background: Computer Science

BSc in Computer Science and a few years out of academia.


2016
Kyle Reed
Kyle Reed
Supervisor:
Prof Darren Cosker
Industrial Supervisor:
Dr Steve Caulkin

Improving Facial Performance Animation using Non-Linear Motion

Industrial Partner:

Cubic Motion

Research Project: Improving Facial Performance Animation using Non-Linear Motion

Cubic Motion is a facial tracking and animation studio, most famous for their real-time live performance capture. The aim of this research is to improve the quality of facial motion capture and animation through the development of new methods for capture and animation.

We are investigating the utilisation of non-linear facial motion observed from 4D facial capture to improve the realism and robustness of facial performance capture and animation. As the traditional pipeline relies on linear approximations of facial dynamics, we hypothesise that using observed non-linear dynamics will automatically factor in subtle nuances such as fine wrinkles and micro-expressions, reducing the need for animators to hand-refine animations.

Starting with a pipeline for 4D capture of a performer’s range of motion (or Dynamic Shape Space), we apply this information to various components of the animation pipeline, from rigging and blendshape solving to performance capture and keyframe animation. We also investigate how, by acquiring the Dynamic Shape Spaces of multiple individuals, we can develop a motion manifold for the personalisation of individual expression that can be used as a prior for subject-agnostic animation. Finally, we validate the need for non-linear animation through comparison with linear methods and through audience perception studies.
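For reference, the linear approximation used in the traditional pipeline is the classic blendshape model: the face is the neutral mesh plus a weighted sum of per-target vertex deltas. A minimal sketch with toy data (hypothetical shapes, not a studio rig):

```python
import numpy as np

def blend(neutral, targets, weights):
    """Linear blendshape model: neutral + sum_i w_i * (target_i - neutral).

    neutral: (V, 3) neutral-pose vertex positions
    targets: (N, V, 3) sculpted expression targets
    weights: (N,) blend weights, typically in [0, 1]
    """
    deltas = targets - neutral[None, :, :]
    return neutral + np.tensordot(weights, deltas, axes=1)

# Two vertices, two targets: a "smile" moving vertex 0, a "blink" moving vertex 1.
neutral = np.zeros((2, 3))
smile = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
blink = np.array([[0.0, 0.0, 0.0], [0.0, -1.0, 0.0]])
face = blend(neutral, np.stack([smile, blink]), np.array([0.5, 1.0]))
```

Because the output is strictly linear in the weights, effects that depend on combinations of expressions (wrinkle bulging, micro-expressions) cannot emerge, which is exactly the limitation the observed non-linear dynamics aim to address.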

MSc Digital Entertainment - masters project:

Using convolutional neural networks (CNNs) to predict occluded facial expressions when wearing head - mounted displays (HMDs) for VR.

Background: Computer Science

BSc (Hons) Computer Science with Industrial Placement Year, University of Bath.

Download Kyle's Research Profile


2016
Padraig Boulton (Paddy)
Padraig Boulton (Paddy)
Supervisor:
Prof Peter Hall
Industrial Supervisor:
Alex Jolly

Recognition of Specific Objects Regardless of Depiction

Industrial Partner:

Disney Research

Research Project: Recognition of Specific Objects Regardless of Depiction

Recognition numbers among the most important open problems in Computer Vision. The state of the art using neural networks achieves truly remarkable performance when given real-world images (photographs). However, with one exception, the performance of every mechanism for recognition falls significantly when the computer attempts to recognise objects depicted in non-photorealistic form. This project addresses that important gap in the literature by developing mechanisms able to recognise specific objects regardless of the manner in which they are depicted. It builds on the state of the art, which is alone in generalising uniformly across many depictions.

In this case, the objects of interest are specific objects rather than visual object classes, and more particularly the objects represent visual IP as defined by the Disney corporation. Thus an object could be “Mickey Mouse”, and the task would be to detect “Mickey Mouse” photographed as a 3D model, as a human wearing a costume, as a drawing on paper, as printed on a T-shirt and so on.

Currently we are investigating how different art styles map the salient information of object classes or characters, and using this to develop a recognition framework that can use examples from artistic styles to learn a domain-agnostic classifier capable of generalising to unseen depictive styles.

MSc Digital Entertainment - Masters project:  

Undoing Instagram Filters: creating a generative adversarial network (GAN) which takes a filtered Instagram photo and synthesises an approximation of the original photo.

Background: Automotive Engineering

MEng Automotive Engineering, Loughborough University.


2016
Lewis Ball
Lewis Ball
Supervisor:
Prof Lihua You, Prof Jian Jun Zhang
Industrial Supervisor:
Dr Mark Leadbeater, Dr Chris Jenner

Material based vehicle deformation and fracturing

Industrial Partner:

Ubisoft Reflections

Research Project: Material based vehicle deformation and fracturing

Damage and deformation of vehicles in video games is essential for delivering an exciting and immersive experience to the player; however, there are tough constraints on deformation methods used in video games. They must produce deformations which appear plausible, so as not to break the player's immersion, yet they must also be robust enough to remain stable in any situation the player may encounter. Lastly, any deformation method must be fast enough to calculate deformations in real time while leaving enough time for other critical game-state updates such as rendering, AI and animation.

My research focuses on augmenting real-time physics simulations with data-driven methods. Data from offline high-quality, physically-based simulations are used to augment real-time simulations in order to allow them to adhere to physically correct material properties while also remaining fast and stable enough to use in production-quality video games. 

Background:

BSc Physics and MSc Scientific Computing, University of Warwick. 


2016
Azeem Khan
Supervisor:
Dr Tom Fincham Haines
Industrial Supervisor:
Michele Condò

Procedural gameplay flow using constraints

Industrial Partner:

Ubisoft Reflections

Research Project: Procedural gameplay flow using constraints

This project involves using machine learning to identify what players find exciting or entertaining as they progress through a level.  This will be used to procedurally generate an unlimited number of levels, tailored to a user's playing style.

Tom Clancy's The Division is one of the most successful game launches in history, and the Reflections studio was a key collaborator on the project. Reflections also delivered the Underground DLC within a very tight development window. The key to this success was the creation of a procedural level design tool, which took a high-level script that outlined key aspects of a mission template and generated multiple different underground dungeons that satisfied this gameplay template. The key difference from typical procedural environment generation technologies is that the play environment is created to satisfy the needs of gameplay, rather than trying to fit gameplay into a procedurally generated world.

The system used for TCTD had many constraints, and our goal is to develop technology that will build on this concept to generate an unlimited number of missions and levels procedurally, in an engine-agnostic manner so it can be used for any number of games. We would like to investigate using Markov constraints, inspired by the 'Flow Machines' research undertaken by Sony to generate music, text and more in a style dictated by the training material (http://www.flow-machines.com/); other techniques may also be considered.
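A minimal sketch of the Markov-constraint idea, with a hypothetical room vocabulary and transition table standing in for a real mission template:

```python
import random

# Hypothetical room vocabulary and allowed transitions (the Markov constraints):
# a level is a sequence of rooms in which every consecutive pair must be an
# allowed transition, and the level must end at the 'boss' room.
TRANSITIONS = {
    "start":    ["corridor", "arena"],
    "corridor": ["arena", "treasure", "corridor"],
    "arena":    ["corridor", "boss"],
    "treasure": ["arena"],
}

def generate_level(length, seed=0):
    rng = random.Random(seed)

    def extend(seq):
        # Backtracking sampler over the constrained chain: try successors in
        # random order, undoing choices that cannot reach 'boss' in time.
        if len(seq) == length:
            return seq if seq[-1] == "boss" else None
        options = TRANSITIONS.get(seq[-1], [])
        for nxt in rng.sample(options, len(options)):
            result = extend(seq + [nxt])
            if result:
                return result
        return None

    return extend(["start"])
```

Changing the seed yields different valid levels, while the transition table (the "training material" analogue) guarantees every generated sequence respects the constraints.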

Masters Project:

An Experimental Approach to the Complexity of Solving Bimatrix Games

Background: Physics

MSci Physics with Theoretical Physics, Imperial College

Download Azeem's Research Profile


2016
Catherine Taylor
Supervisor:
Prof Darren Cosker, Dr Neill Campbell
Industrial Supervisor:
Eleanor Whitley

Deformable objects for virtual environments

Industrial Partner:

Marshmallow Laser Feast

Research Project: Deformable objects for virtual environments

There are currently no solutions at market that can rapidly generate a virtual reality 'prop' from a generic object, and then render it into an interactive virtual environment, outside of a studio. A portable solution such as this would enable creation of deployable immersive experiences where users could interact with virtual representations of physical objects in real time, opening up new possibilities for applications of virtual reality technologies in entertainment, but also in sports, health and engineering sectors.

This project combines novel algorithmic software for tracking deformable objects, interactive stereoscopic graphics for virtual reality, and an innovative configuration of existing hardware, to create the Marshmallow Laser Feast (MLF) DOVE system. The project objective is to create turn-key tools for repeatably developing unique immersive experiences and training environments. The DOVE system will enable MLF to create mixed reality experiences such as live productions, serialised apps and VR products/experiences to underpin significant business growth and new job creation opportunities.

Background: Maths

BSc Mathematics, Edinburgh University; Dissertation on Cosmological Models

Download Catherine's Research Profile


2015
Simone Barbieri
Supervisor:
Xiaosong Yang, Zhidong Xiao
Industrial Supervisor:
Ben Cawthorne, Thud Media

3D Content Creation Exploiting 2D Character Animation

Industrial Partner: Bait Studio

Research Project: 3D Content Creation Exploiting 2D Character Animation

While 3D animation is constantly growing in popularity, 2D is still widely used in animation production. 2D has two main advantages. The first is economic: it is quicker to produce, having one less dimension to consider. The second matters to artists: 2D characters usually have highly distinctive traits, which are lost in a 3D transposition. An iconic example is Mickey Mouse, whose ears appear circular in 2D no matter which way he is facing.

This research project investigates the automatic generation of 3D content from existing 2D character animations. To retain the advantages of 2D in 3D, we propose a three-step approach: the generation of a 3D model for each perspective of each body part of the character; a registration method for each pair of models from adjacent perspectives; and the generation of the 3D animation from the 2D one.

Find out more about Simone here.

Download Simone's Research Profile


http://barbierisimone.com


2015
Joanna Tarko
Supervisor:
Dr Christian Richardt
Industrial Supervisor:
Tim Jarvis

Graphics Insertions into Real Video for Market Research

Industrial Partner

CheckmateVR

Research Project: Graphics Insertions into Real Video for Market Research

Often, when asked, people can't explain why they bought a specific product. The aim of market research is to design research methods that help to explain why the decision was made. In a perfect scenario, study participants would be placed in a real, but fully-controlled shopping environment; however, in practice, such environments are very expensive or even impossible to build. Virtual reality (VR) environments, in turn, are fully controllable and immersive, but they lack realism.

My project is on combining video camera footage with computer-generated elements to create sufficiently realistic (or plausible) but still controlled environments for market research. The computer graphics elements can range from texture replacement (as on the screen of a mobile phone) through to complete three-dimensional models of buildings (such as a petrol station). More commonly, billboards, posters and individual product items comprise the graphics models. After working with standard cameras, I focused on 360° cameras (cameras that capture everything around them in every direction), which are rapidly gaining in popularity, and may provide a good replacement for VR content in terms of immersion.
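The texture-replacement case can be sketched with a planar homography: estimate the mapping from the flat graphic's corners to their observed positions in the frame, then warp the replacement texture through it. A minimal estimation step using the standard direct linear transform (the point coordinates below are made up):

```python
import numpy as np

# Sketch: estimate the 3x3 homography that maps the corners of a flat graphic
# (e.g. a poster) to their observed positions in a video frame, so that a
# replacement texture can be warped into the same place. Standard DLT.
def homography(src, dst):
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the stacked constraint matrix.
    _, _, vt = np.linalg.svd(np.array(rows, float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

With four corner correspondences per frame, `apply_h` places every pixel of the replacement texture; real insertion pipelines add tracking, lighting matching and anti-aliasing on top.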

MSc Digital Entertainment - Masters project:

Matchmoving on set with the use of real-time visual-inertial localization and depth estimation, working with Dr Neill Campbell

Background: Applied Physics/ Computer Graphics

Download Joanna's Research Profile


2015
Lazaros Michailidis
Supervisor:
Dr Emili Balaguer-Ballester
Industrial Supervisor:
Jesus Lucas Barcias

Neurogaming

Industrial Partner:

Sony Interactive Entertainment Europe

Research Project: Uncovering the physiological correlates of flow experience in virtual reality games

The purpose of this study is to investigate the physiological underpinnings of flow during virtual reality game playing. Flow is considered a highly desirable mental state, with links to creativity, increased performance and well-being. It constitutes the core experience of task engagement and is particularly relevant in video games.

Detecting and predicting flow in either real-time or offline settings can facilitate our understanding of design and usability parameters that will allow for an engaging and enjoyable experience in digital media use. By extension, adaptation of the video game at hand, based on the user's physiology, can help create compelling experiences that will maintain the user's concentration, motivation and replay intention. These are factors highly valued by game designers, as they can identify areas wherein the game has failed to stimulate an immersive experience.

Our research commenced with heart rate variability and electrooculography and extended to electroencephalography, all of which have been employed in a custom game tailored for virtual reality. Through this project, we aim to help experts create better and more engaging digital games that will benefit both sides: the player base and the industry.
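As a small concrete example of the kind of physiological feature involved, RMSSD is a standard time-domain heart-rate-variability measure computed from successive RR intervals (this is a generic textbook formula, not the project's specific pipeline):

```python
import math

# Generic time-domain HRV feature: RMSSD, the root mean square of successive
# differences between consecutive RR intervals (in milliseconds).
def rmssd(rr_intervals_ms):
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

Features like this, computed over sliding windows of gameplay, are what a flow classifier would consume alongside EOG and EEG signals.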

Download Lazaros' Research Profile


2015
Thomas Joseph Matthews
Supervisor:
Dr Feng Tian / Prof Wen Tang
Industrial Supervisor:
Tom Dolby

Semi-Automated Proficiency Analysis and Feedback for VR Training

Industrial Partner:

AI Solve

Research Project: Semi-Automated Proficiency Analysis and Feedback for VR Training

Virtual Reality (VR) is a growing and powerful medium that is finding traction in a variety of fields. My research aims to encourage immersive learning and knowledge retention through short-form VR training.

Our project is streamlining the process of proficiency analysis in virtual reality training by using performance recording and data analytics to directly compare subject matter experts and trainees. Currently, virtual reality training curricula require at least a post-performance review and/or direct supervised interpretation to provide feedback, whereas our system will be able to use expert performance models to direct feedback towards trainees' strengths and weaknesses, both in a specific scenario and across the subject curriculum.

Using an existing virtual reality training scenario developed by AiSolve and Children's Hospital Los Angeles, subject matter experts will complete multiple performance variations in a single scenario. This provides a scenario action graph which is then used as a baseline against which to measure trainee performances, noting significant variations in attributes like decision-making, stimuli perception and stress management. We will validate the system using objective and subjective accuracy metrics, implementation feasibility and usability measures.

More information on the VRSims framework this project is attached to can be found on the AiSolve website: http://www.aisolve.com/enterprise/

Download Thomas' Research Profile


http://www.aisolve.com/


2015
Ifigeneia Mavridou
Supervisor:
Dr Emili Balaguer-Ballester, Dr Ellen Seiss, Dr Alain Renaud
Industrial Supervisor:
Dr Charles Nduka

Emotion and engagement analysis of virtual reality experiences

Industrial Partner:

Emteq

Our emotions are at the core of human existence, yet there are many questions to be answered about how our emotions affect what we feel, think and do. Virtual reality (VR) represents an ideal technology for studying human behaviour, and for people to experience things that would be otherwise impossible. As the visual and audio stimuli and level of realism are under complete creative control, most aspects of the user experience can be precisely measured. Understanding and measuring the emotional responses of an individual immersed in room-scale, free-walking VR scenarios could provide the ultimate laboratory for behavioural sciences and user experience research.

For this project, we are collaborating closely with Emteq Ltd. to develop a system for emotion detection in VR using physiological signals and behavioural data. This work has assisted in the further development of a novel wearable device called "EmteqVR", consisting of physiological sensors that can read the emotional responses of the wearer. Our emotion-detection approach, based on machine learning, uses the two-dimensional model of valence and arousal. The outcomes of this research will provide baseline data to inform future VR research and the development of mental healthcare applications.

Download Ifigeneia's Research Profile


2015
Naval Bhandari
Supervisor:
Prof Eamonn O'Neill
Industrial Supervisor:
Simon Luck

Enhancing user interaction with data using AR/MR

Industrial Partner:

BMT Defence Services

Research Project:

An exploration into the enhancement of dimensionality, interactivity, and immersivity within augmented and virtual reality

This research explores whether presenting status and instructional information through augmented or mixed reality, based on geographic, positional or other data, is beneficial to end users and the organisation. The information may be based upon realistic scenarios such as initial operating procedures from documentation used by the Ministry of Defence and managed by BMT. The user is required to understand information through this mechanism, interact with it (e.g. gesturally), and manipulate it to progress through tasks and activities. The research looks at innovations in how to consume the base data, how best to store and then represent it to the user to enable hands-free interaction, how to track completed actions centrally, and which devices make this effective for end users. This will include rapid prototyping and evaluation of systems.

Background: Computer Science

BSc (Hons) Computer Science, University of Leeds

Download Naval's Research Profile


2015
Javier Dehesa
Supervisor:
Julian Padget
Industrial Supervisor:
Andrew Vidler

Modelling human–character interaction in virtual reality

Industrial Partner

Ninja Theory

Research Project: Modelling human–character interaction in virtual reality

Interaction design in virtual reality is difficult because of the typical nature of the input (tracked head and hands position) and the freedom of action of the user. In the case of interaction with virtual (human-like) characters, generating plausible reactions under every circumstance generally requires intensive animation work and complex hand-crafted logic, sometimes also imposing limitations on the world design. We address the problem of real-time human–character interaction in virtual reality by proposing a general framework that interprets the intentions of the user in order to guide the animation of the character.

Using a vocabulary of gestures, the framework analyses the head and hands 3D input provided by the tracking hardware and decides the actions that the character should take, which are then used to synthesise an appropriate animation for the scene. We propose a novel combination of data-driven models that perform the tasks of gesture recognition and character animation, guided by simple logic describing the interaction scenarios.
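A deliberately simplified sketch of the gesture-vocabulary idea, using nearest-template matching on toy trajectories (the gesture names and templates are invented; the project itself uses learned, data-driven models rather than fixed templates):

```python
import numpy as np

# Toy gesture vocabulary: each gesture is a short template trajectory of hand
# positions. Classification is nearest-template after removing the starting
# position, so recognition does not depend on where the gesture begins.
GESTURES = {
    "thrust": np.array([[0, 0, 0], [0, 0, 1], [0, 0, 2]], float),
    "parry":  np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], float),
    "raise":  np.array([[0, 0, 0], [0, 1, 0], [0, 2, 0]], float),
}

def classify(trajectory):
    t = np.asarray(trajectory, float)
    t = t - t[0]                       # translation invariance
    return min(GESTURES, key=lambda g: np.linalg.norm(GESTURES[g] - t))
```

In the full framework the recognised gesture would feed the interaction logic, which in turn selects and synthesises the character's responding animation.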

We consider the problem of sword fighting in virtual reality as our case study and identify other potential applications and extensions. We expect our research to establish an academically and industrially valuable interaction framework, while also providing novel insights in real-time applications of machine learning to virtual reality environments.

Background: Mathematics/ Computer Science

Masters Degree in Mathematics and Computer Science, Universidad de Cantabria

Download Javier's Research Profile


2015
Tom Matko
Supervisor:
Prof Jian Chang
Industrial Supervisor:
John Leonard, Wessex Water

Flow Visualisation of Computational Fluid Dynamics Modelling

Industrial Placement:

Wessex Water

Research Project:

Flow Visualisation of CFD Modelling Of Aeration Bioreactors In Activated Sludge Wastewater Treatment

This multi-disciplinary project in engineering and computer vision is based in the Centre for Digital Entertainment (CDE), an EngD programme run in collaboration between the University of Bath and Bournemouth University. The project is co-supervised by an industrial company, Wessex Water (WW), the Water Innovation and Research Centre (WIRC) at the University of Bath, and the National Centre for Computer Animation (NCCA) at Bournemouth University.

WW invests in new state-of-the-art aeration systems that enable a better understanding of the aeration treatment process. WW recognises that existing designs of oxidation ditches can potentially have a detrimental effect on hydrodynamics and oxygen uptake efficiency. New aeration systems in oxidation ditches are therefore of interest to WW.

This project uses Computational Fluid Dynamics (CFD) and flow visualisation modelling to increase the understanding of optimal operating and flow conditions in aeration bioreactors. CFD models are an important mathematical and numerical tool for predicting multiphase flows and aeration conditions in activated sludge bioreactors, in order to enable process design improvements and retrofit of existing technology. This project focuses on predicting dissolved oxygen and bubble plume distributions, which are key indicators of aerobic conditions in bioreactors. Fluid graphical visualisation enables a better understanding of the hydrodynamics and the retrofitting of existing aeration technology design.

Download Tom's Research Profile


2014
Asha Ward
Supervisor:
Dr Tom Davis
Industrial Supervisor:
Luke Woodbury, Dotlib

Music Technology for users with Complex Needs

Industrial Partner:

Three Ways School

Research Project: Music Technology for users with Complex Needs

Music is essential to most of us: it can light up all areas of the brain, help develop communication skills, help to establish identity, and offer a unique path for expression. People with complex needs can face barriers to participation in music-making and sound exploration activities when using instruments and technology aimed at typically able users. My research explores the creation of novel and bespoke hardware and software to make music creation accessible to those with cognitive, physical, or sensory impairments and/or disabilities.

Using tools like Arduino and sensor based hardware, alongside software such as Max/MSP and Ableton Live, the aim is to provide innovative systems that allow for the creation of personal instruments that tailor to individual needs and capabilities. These instruments can then be used to interact with sound in new ways not available with traditional acoustic instruments. Technology can be used to turn tiny movements into huge sounds and tangible user interfaces can be used to investigate the relationship between the physical and digital world, leading to new modes of interaction.
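As a small illustration of the kind of mapping such systems perform (the value ranges and scale choice are hypothetical), a raw sensor reading can be quantised onto a pentatonic scale so that even tiny, imprecise movements always land on a consonant note:

```python
# Map a raw sensor reading (e.g. a 10-bit Arduino ADC value, 0-1023) onto MIDI
# notes of a major pentatonic scale, so that any movement lands on a consonant
# pitch. The ranges and scale are illustrative choices for the sketch.
PENTATONIC = [0, 2, 4, 7, 9]  # semitone offsets within one octave

def sensor_to_midi(value, lo=0, hi=1023, base_note=60, octaves=2):
    value = max(lo, min(hi, value))                 # clamp to sensor range
    steps = len(PENTATONIC) * octaves
    idx = int((value - lo) / (hi - lo + 1) * steps) # quantise to scale steps
    octave, degree = divmod(idx, len(PENTATONIC))
    return base_note + 12 * octave + PENTATONIC[degree]
```

In practice a mapping like this would sit between the sensor hardware and a sound engine such as Max/MSP or Ableton Live, tuned per user to their range of movement.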

Working with my industrial sponsor, Three Ways School in Bath, and industrial mentor Luke Woodbury of Dotlib, my research uses an Action Research methodology to create bespoke, tangible tools that combine hardware and software, allowing central users, and those facilitating, to create and explore sound in a participatory way.

Download Asha's Research Profile


2014
Mark Moseley
Supervisor:
Dr Leigh McLoughlin
Industrial Supervisor:
Sarah Gilling

Industrial Placement:

Victoria Education Centre

Research Project:

Using Assistive Technology accessible assessments, eye gaze, robotics and haptics to identify the knowledge of physical world concepts of those who have profound motor impairments

My research is in the area of Assistive Technology (AT) and disability:

Cognitively able children and young people who have profound motor impairments and who are unable to speak (the target group or TG) face many barriers to learning, communication, personal development, physical interaction and play experiences compared to their typically developing peers.

Physical interaction and play are known to be important components of child development but this group currently has few suitable ways in which to participate in these activities.

The TG may have knowledge about real-world physical concepts despite having limited physical interaction experiences, but it can be difficult to reveal this knowledge. Conventional assessment techniques are not suitable for this group, largely due to accessibility issues: most existing assessments require a verbal answer or a physical gesture such as pointing to an answer, neither of which is appropriate for this group.

Working with a team of Speech and Language Therapists, two new digital AT accessible assessments were created for this research.

An intervention was developed which enabled the TG to experience simulated physical interaction.  This intervention involved the participants using an eye gaze controlled robotic arm with haptic feedback to complete a set of tasks.

The assessments and the intervention were trialled with staff at Victoria Education Centre and then used with 2 participants from the TG.

Find out more about when Mark travelled to the University of Alberta, Canada as part of his research project here.

Download Mark Moseley's Research Profile


2014
Ieva Kazlauskaite
Supervisor:
Dr Neill Campbell, Prof Darren Cosker
Industrial Supervisor:
Dr Tom Waterson

ML for character animation and motion style synthesis

Industrial Partner

Electronic Arts (EA), Frostbite team

Research Project

Machine Learning for character animation and motion style synthesis

The project investigates the use of machine learning techniques to improve the interactive animation of game characters based on motion capture data.

Learning from sequential data is challenging as data might be sampled at different and uneven rates, sequences might be collected out of phase, etc. Consider the following scenarios: humans performing a task may take more or less time to complete parts of it, climate patterns are often cyclic though particular events take place at slightly different times in the year, the mental ability of children varies depending on their age, neuronal spike waveforms contain temporal jitter, and replicated scientific experiments often vary in timing. However, most sample statistics, e.g. mean and variance, are designed to capture variation in amplitude rather than phase/timing. This leads to increased sample variance, blurred fundamental data structures and an inflated number of principal components needed to describe the data. Therefore, the data needs to be aligned in order for dependencies such as these to be recovered. This is a non-trivial task that is often performed as a pre-processing stage to modelling. Further difficulties arise when the dataset contains observations from several distinct sequences. Consider, for example, a set of motion capture experiments that include tasks such as running, jumping and sitting down. Data for each of these three types of sequence can be aligned to themselves but a global alignment between them may not exist.

In this project we analyse the aforementioned scenarios using probabilistic non-parametric approaches.
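For contrast with the probabilistic approach taken here, the classical solution to the alignment problem described above is dynamic time warping, sketched below (illustrative only; it is not the project's method):

```python
import numpy as np

# Classic dynamic time warping between two 1-D sequences: the minimum total
# pointwise cost over all monotonic alignments, found by dynamic programming.
def dtw_distance(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # stretch a
                                 cost[i, j - 1],      # stretch b
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]
```

DTW aligns a pair of sequences deterministically; the probabilistic non-parametric models used in the project instead learn warps and uncertainty jointly across many sequences.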

Background: Mathematics

Master's degree, Mathematics (MMath), University of Durham


2014
Garoe Dorta Perez
Supervisor:
Dr Neill Campbell, UoB, Dr Lourdes Agapito, UCL
Industrial Supervisor:
Dr Sara Vicente, Dr Ivor Simpson

Learning models for intelligent photo editing

Industrial Partner

Anthropics Technology Ltd

Research Project

Learning models for intelligent photo editing

My main research interests lie in the areas of machine learning and computer vision. My project at Anthropics Technology Ltd. involves face modelling applications using deep neural networks (DNNs), and ties in with the software produced at the company, which is centred around human beauty with a special focus on facial analytics.

The goal of the project is to develop novel computer vision and graphics technologies that enable users to intuitively edit photos to produce professional quality results. Photo editing applications need to be simple for a user to interact with, but sufficiently flexible to remove any flaws without introducing artefacts. This project will develop vision models for extracting information about the objects in the scene and graphics models to provide realistic user driven image enhancements. Image synthesis will likely be employed for generating data to train the models and for creating novel image effects.

Download Garoe's Research Profile


http://people.bath.ac.uk/gdp24/


2013
Rahul Dey
Supervisor:
Dr Christos Gatzidis
Industrial Supervisor:
Jason Doig

New Games Technologies

My research focuses on real-time voxelization algorithms and procedurally creating content in voxel spaces. Creating content using voxels is more intuitive than polygon modelling and possesses a number of other advantages. This research intends to provide novel methods for real-time voxelization and for subsequently editing the resulting voxel content using procedural generation techniques. These methods will also be adapted for next-generation consoles to take advantage of the features that they expose.
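A minimal sketch of voxelization, here of a sampled point set into a boolean occupancy grid (real-time mesh voxelization on consoles is considerably more involved, typically running per-triangle on the GPU):

```python
import numpy as np

# Sketch: voxelize a point set (e.g. points sampled on a mesh surface) into a
# fixed-resolution occupancy grid over an axis-aligned bounding box.
def voxelize(points, grid_size, bounds_min, bounds_max):
    points = np.asarray(points, float)
    bmin = np.asarray(bounds_min, float)
    bmax = np.asarray(bounds_max, float)
    grid = np.zeros((grid_size,) * 3, dtype=bool)
    # Map each point to its cell index and mark the cell occupied.
    idx = ((points - bmin) / (bmax - bmin) * grid_size).astype(int)
    idx = np.clip(idx, 0, grid_size - 1)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid
```

Once content lives in a grid like this, procedural edits become simple array operations (carving, offsetting, noise-based erosion), which is part of what makes voxel editing so intuitive.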


2013
Adam Boulton
Supervisor:
Dr Rachid Hourizi, Prof Eamonn O'Neill
Industrial Supervisor:
Alice Guy

The Interruption and Abandonment of Video Games

Industrial Partner:

PaperSeven

Research Project

The cost of video game development is rapidly increasing as the technological demands of producing high quality games grow ever larger. With budgets set to spiral into the hundreds of millions of dollars, and audience sizes rapidly expanding as gaming reaches new platforms, we investigate the phenomenon of task abandonment in games. Even the most critically acclaimed titles are rarely completed by even half their audience. With the cost of development so high, it is more important than ever that developers, as well as the players, get value for money. We ask why so few people are finishing their games, and investigate whether anything can be done to improve these numbers.

Background: Computer Science

BSc Computer Science, Cardiff University


2013
Tom Smith
Supervisor:
Dr Julian Padget
Industrial Supervisor:
Andrew Vidler

Procedural content generation for computer games

Industrial Partner

Ninja Theory

Research Project

Procedural content generation for computer games

Procedural content generation (PCG) is increasingly used in games to produce varied and interesting content. However, PCG systems are becoming increasingly complex and tailored to specific game environments, making them difficult to reuse, so we investigate ways to make PCG code reusable and to allow simpler, usable descriptions of the desired output. By allowing the behaviour of the generator to be specified without altering the code, we provide increasingly data-driven, modular generation. We look at reusing tools and techniques originally developed for the semantic web, and investigate the possibility of using them with industry-standard games development tools.

Background: Computer Science

Master of Engineering (MEng), Computer Science with Artificial Intelligence, University of Southampton


2013
Tom Wrigglesworth
Supervisor:
Dr Leon Watts, Dr Simon Jones
Industrial Supervisor:
Lucy May Maxwell

Towards a Design Framework for Museum Visitor Engagement with Historical Crowdsourcing Systems

Industrial Partner:

Imperial War Museums

I am researching how novice users engage with online museum collections through crowd-sourcing initiatives. My project is in collaboration with the Imperial War Museums and is primarily focused on the American Air Museum website - a large online archive of media and information that accommodates crowd-sourced contributions. My research interests are in Human-Computer Interaction, Research Through Design methodologies and encounters with cultural heritage through web-browser based technologies.


2013
Zack Lyons
Supervisor:
Dr Leon Watts
Industrial Supervisor:
Prof Nigel Harris

Virtual Therapy for Acquired Brain Injury Rehabilitation

Industrial Partner:

Designability / Brain Injury Rehabilitation Trust

Research Project:

Virtual Therapy - A Story-Driven and Interactive Virtual Environment for Acquired Brain Injury Rehabilitation

An estimated 350,000 people are affected in the UK each year by an acquired brain injury (ABI). When such injuries affect frontal lobe areas, a person can start to exhibit challenging behaviours that preclude community integration. Even seemingly basic everyday tasks, such as buying a bus ticket or searching for a shop, can be profoundly difficult and highlight significant behavioural obstacles to overcome. A crucial concern for clinicians is therefore to assess such obstacles by witnessing how well people with an ABI manage apparently routine tasks.

Our research has generated an immersive and interactive virtual environment that places people with ABIs into a realistic community setting. The environment challenges their ability to organise tasks, think creatively about solutions, and seek answers through social interactions. By delivering tasks that mirror the demands of the real world, clinicians may be able to better predict how people will behave in the community and train them to overcome these difficulties.

Download Zack's Research Profile




© Centre for Digital Entertainment 2019. Site by MediaClash.