Our students and projects

We currently have nearly 50 live projects with 40 companies.

Our proactive Research Engineers (REs) are involved in everything from procedural generation of content for international games companies and assistive technologies for stroke rehabilitation to interactive technologies for major broadcasters and virtual reality for naval training.

Specialist areas include:

Virtual and Augmented Reality applications, eye-tracking, automatic creation of 3D content, stop motion animation, assistive technology, brain rehabilitation, procedural content generation, real-time rendering, texture mapping, serious games, UI/UX, HCI, voxels, fluid simulation, GPGPU programming, parallel computing, motion and facial capture, medical imaging, volumetric visualisation, interactive content, participatory design, the Internet of Things and automatic musculoskeletal simulation – among many others!

Get to know some of our REs below and see our new video, filmed during our CDE Winter Networking Event at the British Film Institute, London. Hear more from our students and alumni.


2018
Neerav Nagda

Background:

BA (Hons) Computer Visualisation and Animation


2018
Katarzyna Wojna

Research Interest:

I have a strong background in 3D/visual computing and HCI. The EngD in Digital Entertainment offers me an opportunity to carry out research in the field of digital entertainment but with a practical application, working on real world problems; I have a particular interest in the use of games technology in healthcare.

Background: Computer Science

I am an honours graduate from the Computer Games Modelling and Animation undergraduate course at the University of Derby (with Best student submission for the Department of Mathematics and Computer Science) and completed my Masters in Human Computer Interaction at the University of Nottingham. This has given me a very strong background in 3D/visual computing and HCI.

Programming /Design Skills:

Modelling/texturing software (e.g. Autodesk Maya/3ds Max/3DCoat); game engines (Unreal or Unity); sculpting software (ZBrush/Autodesk Mudbox); rendering software (Mental Ray/KeyShot/Marmoset Toolbag); visual effects (Adobe After Effects); 2D (Adobe Photoshop/Illustrator/InDesign)

Human computer systems: Mixed Reality Technologies; UI/UX

Portfolio: https://www.artstation.com/artist/katty


2018
Jack Brett
Supervisor:
Dr Christos Gatzidis

Augmented Music Interaction and Gamification

Industrial Partner:

Roli

Background: Games Technology

Games Technology (First Class Honours BSc). My previous research work was conducted mostly with the Psychology Department, where programs were created for mobile/PC use and later branched into virtual reality. Most recently, I have been focusing on a VR program used in clinical trials to gauge the severity of conditions such as dementia.

My dissertation focused on the learning curves of certain games engines, comparing the development process and performance of different engines/SDKs. I am now undertaking my placement at Roli, writing a thesis about the barriers to entry to creating music as well as assessing how one learns an instrument at beginner level.

I enjoy developing mobile games in my spare time and playing around with UE4 to test different player mechanics within a virtual world.


http://jackbrett.co/


2018
Karolina Pakenaite

Background: Maths/ Computer Science

MSci Mathematics with a Year in Computer Science, Birmingham


2018
Olivia Ruston

Background: Computer Science

BSc Computer Science, University of Bath


2018
Andrea Maiellaro

Research Interest

I am interested in designing videogames that are as realistic as possible, so close to the player's reality that they are not perceived to be games.
 

Background: Computer Science

Masters in Computer Science, University of Bari


2018
Farbod Shakouri

Connected Tangible Objects for Immersive Augmented Reality

I’m investigating methods of interaction with tangible objects in immersive Augmented Reality (AR) narratives. AR has become a prevalent technology in the games industry; a medium that merges interactive virtual information with our physical environment. Various systems have explored methods for interacting with AR and narratives. However, little research has been carried out to tackle the challenges of enabling the Internet of Tangible Things (IoTT) (Angelini, et al. 2017) to be aware of real-time virtual entities in the context of immersive narratives.

Background: Games Technology

BSc (Hons) Games Technology, Bournemouth University - with one year placement at my startup company.

Research Assistant for Corpus Quod:

I explored Augmented Reality (AR) and immersive technologies for interactive narratives, developing new approaches for immersive performance experience and ways of documenting and assessing its consumption, by designing and implementing an AR immersive experience prototype intended to capture the attitudes of audience-participants towards refugees and asylum seekers (RAS) and the UK-based asylum process.

Research Project: Avebury Portal.

My research project focused on using a location-based Augmented Reality application to enhance users’ experience at an archaeological heritage site, creating a mystery treasure hunt that enables users to use the environment for clues in order to solve the underlying puzzles.

ResearchGate


2018
Aaron Demolder

Improving data capture and 3D integration for VFX

Background:

BA (Hons) Computer Animation and Visualisation

 

I'm interested in better incorporating emerging technology into art-driven pipelines, including everything from mobile sensors to lidar units.
This could be traditional VFX or expand the possibilities of real-time capture and performance.

Having learnt the skills of a 3D generalist and explored almost the entirety of the 3D pipeline, I want to improve artists' experiences with content creation.


https://aarondemolder.com


2018
Raphael Fernandes

Research Interest: AI / ML

My research areas of interest are in Artificial Intelligence (AI) and machine learning; I have a particular interest in using ML in character control and in procedural content generation.

I have had a deep interest in video game culture for a long time and became interested in the development of video games during my degree, so much so that I started to tinker with packages such as GameMaker Studio and Unity. I have since been looking for opportunities to enter the field of digital entertainment; the CDE programme fits perfectly with my passion for and interest in video game technology. The idea of working with a company for my EngD and getting real hands-on experience was one of the most attractive prospects of the programme.

Background: Physics

BSc (Hons) Physics, University of Leicester, specialising in numerical modelling of physical systems.

My final year project involved the modelling of the electrostatic properties of 2D materials such as graphene when subject to external electric fields. Developing such a model allowed me to familiarise myself with numerical modelling techniques used in the field of scientific computing and programming languages such as C, R and Python.

Programming /Design Skills:

C; Python; R; Matlab; GML

GameMaker Studio 2; Unity


2018
Sydney Day

Background:

BA (Hons) Computer Animation and Visualisation


2018
Robert Kosk

Background:

BA (Hons) Computer Visualisation and Animation


2018
Graham Rigler

Background:

BSc (Hons) Games Programming


2017
Victor Ceballos Inza

Research Interests: Geometry processing with deformable objects

My research areas of interest include Geometry Processing - mesh processing with deformable objects, Computer Animation and Visual Effects, and, in particular, their application in the film industry.

Masters Project: Procedural Modelling

Artists can find procedural modelling systems unintuitive, as these applications rely on the technical skills of the user. By incorporating differentiation of the produced geometry, we can build a system that allows more direct manipulation of the models. We show that such a system can be built to run efficiently in real time. This project builds on a previous dissertation carried out at UCL. We seek to improve the existing application in terms of efficiency, as well as to add new functionality, including the support of novel procedural rules and higher-order differentiation. Working with Dr Yongliang Yang.
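
As a toy illustration of how differentiating the produced geometry enables direct manipulation (the two-parameter rule and all names below are invented for this sketch, not taken from the project), a Jacobian of the generated output with respect to the rule's parameters lets a dragged target position be converted back into parameter updates:

```python
import math

def procedural_point(params):
    """Toy procedural rule: place a point from two parameters
    (angle, radius) -- stand-ins for a real rule's knobs."""
    angle, radius = params
    return (radius * math.cos(angle), radius * math.sin(angle))

def jacobian(f, params, eps=1e-6):
    """Finite-difference Jacobian of f at params (rows = outputs)."""
    base = f(params)
    cols = []
    for i in range(len(params)):
        bumped = list(params)
        bumped[i] += eps
        cols.append([(o - b) / eps for o, b in zip(f(bumped), base)])
    return [list(row) for row in zip(*cols)]

def drag_to(f, params, target, steps=20):
    """Direct manipulation: adjust params so f(params) reaches target,
    via Gauss-Newton steps on the differentiated geometry."""
    p = list(params)
    for _ in range(steps):
        x, y = f(p)
        rx, ry = target[0] - x, target[1] - y
        J = jacobian(f, p)
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        if abs(det) < 1e-12:
            break  # rule not invertible here; stop
        p[0] += ( J[1][1] * rx - J[0][1] * ry) / det
        p[1] += (-J[1][0] * rx + J[0][0] * ry) / det
    return p
```

In a real system the rule would emit a mesh and the Jacobian would come from automatic rather than numerical differentiation, but the inversion step is the same idea.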

Background: Maths, AI and Computer Graphics

BSc in AI & Maths, University of Edinburgh.

MSc in Computer Graphics, Vision & Imaging, University College London.

Research Assistant, Toshiba Healthcare, Edinburgh, working on the application of Computer Vision techniques to healthcare, for example the detection of falls in the elderly.

Universitat Politècnica de Catalunya, Barcelona: research on the analysis of colonic content for diagnosis.


2017
Sameh Hussain
Supervisor:
Prof Peter Hall
Industrial Supervisor:
Andrew Vidler

Learning to render in style

Industrial Partner:

Ninja Theory

Research Project:

Procedural generation: Investigations into real-time applications of style transfer incorporating inference of contextual details to produce stylistic and/or artistic post-processing effects.

Style transfer techniques have provided the means of re-envisioning images in the style of various works of art. However, these techniques can only produce credible results for a limited range of images. As there is no consideration for the contextual details within the image, current style transfer techniques do not produce temporally coherent results. For example, the application of a painterly effect requires the consideration of the objects within an image, the effect needs to be applied in such a way that these objects are still recognisable. We are researching the development of style transfer techniques that will take into account the contextual details with the ultimate aim of creating post-processing effects that can be used in the digital entertainment industry.
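
The "no consideration for contextual details" point can be seen directly in the standard style-transfer loss: the Gram matrix compares which feature channels co-occur while discarding where they occur in the image. A minimal NumPy sketch of that loss (illustrative only, not the project's code):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map of shape (channels, height, width).

    Flattening the spatial dimensions before the inner product is what
    discards spatial layout -- the reason plain style transfer ignores
    the contextual details discussed above.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(features_a, features_b):
    """Squared Frobenius distance between the two Gram matrices."""
    diff = gram_matrix(features_a) - gram_matrix(features_b)
    return float(np.sum(diff ** 2))
```

Because of that flattening, spatially shuffling a feature map leaves its style loss unchanged, which is exactly why object identity and temporal coherence need extra machinery.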

MSc Digital Entertainment - Masters project: 

A parametric model for linear flames, working with Prof Peter Hall

Background: Mechanical Engineering

MEng in Mechanical Engineering, University of Bath; one year placement with Airbus Space and Defence developing software to monitor and assess manufacturing performance.


2017
Valentin Miu
Supervisor:
Dr Oleg Fryazinov

Industrial Partner:

Playfusion Ltd

Research Interests: Application of neural networks

I am implementing a convolutional neural-network-based realtime 3D fluid simulator in a gaming engine, using an algorithm devised by Google. I am interested in neural networks and their applications in simulation acceleration, computer vision and artificial intelligence, in fields such as VFX, gaming, virtual reality and augmented reality. 
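
In Eulerian fluid solvers, the costly step such a learned solver typically replaces is the pressure projection; a single Jacobi iteration of that step is itself a small fixed 3x3 stencil (a convolution), which is what makes a CNN a natural drop-in replacement for many iterations at once. A NumPy sketch of one such iteration (illustrative only, not the project's code):

```python
import numpy as np

def jacobi_pressure_step(p, div, h=1.0):
    """One Jacobi iteration for the 2D pressure Poisson equation
    laplacian(p) = div, with zero-padded (Dirichlet) boundaries.

    The update is a fixed 3x3 stencil: average the four neighbours
    and subtract the scaled divergence.
    """
    padded = np.pad(p, 1)
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:])
    return (neighbours - h * h * div) / 4.0
```

Iterating this to convergence solves the Poisson equation; the learned approach amortises those many iterations into one network evaluation.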

Background: Physics

MSci Physics, University of Glasgow, graduating with First Class Honours. During this time I familiarised myself with compositing and 2D/3D animation in a non-professional setting. In my first year at the CDE, I successfully completed masters-level courses in Maya, OpenGL and Houdini, and have been learning CUDA GPU programming and machine learning.


http://miu-v.com/


2017
Rory Clark
Supervisor:
Dr Feng Tian
Industrial Supervisor:
Adam Harwood

VR and AR applications for Ultrahaptics technology

Industrial Partner:

Ultrahaptics

Research Project:

VR and AR applications for ultrahaptics technology

Background: Games Programming

BSc Games Programming, Bournemouth University, focusing on the use and development of games and game engines, graphical rendering, 3D modelling, and a number of programming languages. Final year dissertation on a virtual reality event planning simulation utilising the HTC Vive. Previous projects on systems ranging from the web and mobile to smart-wear devices and VR headsets.


https://rory.games


2017
Marcia Saul
Supervisor:
Dr Emili Balaguer-Ballester

Industrial Partner:

If you are interested in sponsoring and supporting research on machine learning and neural networks, particularly their application to user behaviour recognition in computer games or the implementation of intelligent games for cognitive training, please contact Mike Board, Bournemouth CDE Project Manager

Research Interests: Medical applications of computational neuro-science 

My main fields of interest are computational neuroscience, brain-computer interfaces and machine learning, with the use of games in applications for rehabilitation and improving the quality of life for patients/persons in care.

MRes - Masters project:

Using computational proprioception models and artificial neural networks to predict two-dimensional wrist position.

Background: Psychology and Computational Neuroscience

BSc in Biology with Psychology, Royal Holloway, University of London.

MSc in Computational Neuroscience & Cognitive Robotics, University of Birmingham.


2017
Thomas Williams
Supervisor:
Dr Elies Dekoninck, Dr Simon Jones, Dr Christof Lutteroth
Industrial Supervisor:
Prof Nigel Harris

AR as a cognitive prosthesis for people living with dementia

Industrial Partner:

Designability

Research Project:

Investigating the use of Augmented Reality as a cognitive prosthesis for people living with dementia

There have been considerable advances in the technology and range of applications of virtual and augmented reality environments. However, to date, there has been limited work examining design principles that would support successful adoption (Gandy 2017). Assistive technologies have been identified as a potential solution for the provision of elderly care. Such technologies have in general the capacity to enhance the quality of life and increase the level of independence among their users. 

The aim of this research project is to explore how augmented reality (AR) could be used to support those with dementia with daily living tasks and activities. This will specifically focus on those living with mild to moderate dementia and their carers. Designability have been working on task sequencing for different types of daily living tasks and have amassed considerable expertise in how to prompt people with cognitive difficulties, through a range of everyday multi-step tasks (Boyd 2015). This project would allow us to explore how AR technology could build on that expertise.

The research will involve developing new applications for augmented reality technology such as the Microsoft HoloLens, Samsung AR or Meta 2. These augmented reality technologies are all still in the early stages of technology maturity; however, they are at the ideal stage of development to explore their application in such a unique field as assistive technology.

MSc Digital Entertainment - Masters project:

A novel gaze tracking system to improve user experience at Cultural Heritage sites, with Dr Christof Lutteroth

Background: Maths/Physics

BSc (Hons) Mathematics and Physics, University of Bath (four years, with placement)


http://blogs.bath.ac.uk/ar-for-dementia/


2017
Michelle Wu

Industrial Partner:

If you are interested in sponsoring and supporting research on using neural networks to improve the process of using mocap data in the generation of high-quality animation, please contact Mike Board, Bournemouth CDE Project Manager

Research Interest: Motion synthesis with Neural Networks

I am interested in Motion and Performance Capture and how Machine Learning algorithms can be applied to motion data for the VFX and game industries. My current research project focuses on the design of a framework for character animation synthesis from content-based motion retrieval. The project's aim is to reuse collections of human motion data, exploiting unsupervised learning to train an effective motion retrieval method. It will provide animators with more control over the generation of high-quality animations, using Neural Networks for motion synthesis purposes.

Background:  Computer Animation, Games and Effects

BSc Software Development for Animation, Games and Effects, Bournemouth University.

Research Assistant in Human Computer Interaction/Computer Graphics in collaboration with the Modelling, Animation, Games and Effects (MAGE) group within the National Centre for Computer Animation (NCCA), focusing on the development and dissemination of the SHIVA Project, software that provides virtual sculpting tools for people with a wide range of disabilities.


2017
Alexandros Rotsidis
Supervisor:
Prof Peter Hall; Dr Christof Lutteroth
Industrial Supervisor:
Mark Lawson

Creating an intelligent animated avatar system

Industrial Partner:

Design Central (Bath) Ltd t/a DC Activ / LEGO

Research Project:

Creating an intelligent avatar: using Augmented Reality to bring 3D models to life. The project will create a 3D intelligent multi-lingual avatar system that can realistically imitate (and interact with) shoppers (adults), consumers (children), staff (retail) and customers (commercial) as users or avatars, using different dialogue, appearance and actions based on given initial data and feedback on the environment and context in which it is placed, creating ‘live’ interactivity with other avatars and users.

While store assistant avatars and virtual assistants are commonplace today, they often act in a scripted and unrealistic manner. These avatars are also often limited in their visual representation (i.e. usually humanoid).

This project is an exciting opportunity to apply technology and visual design to many different 3D objects, bringing them to life to guide and help people (both individually and in groups) learn from their mistakes in a safe virtual space and make better-quality decisions, increasing commercial impact.

Masters Project: AR in Human Robotics

Augmented Reality used in Human Robotics Interaction, working with Ken Cameron

Background: Computer Science

BSc (Hons) Computer Science, University of Southampton; worked in industry for five years as a web developer. A strong interest in Computer Graphics and Machine Learning led me to the EngD programme.


http://www.alexandrosrotsidis.com/


2017
Kenneth Cynric Dasalla
Supervisor:
Dr Christian Richardt, Dr Christof Lutteroth
Industrial Supervisor:
Jack Norris, Chris Price

Mixed Reality Broadcast Solutions

Industrial Partner:

ZubrVR

Research Project:

Exploring the use of realtime depth-sensing camera and positional tracking technologies in video for Mixed Reality Broadcast Solutions - technologies, workflows, implications for content creation

The project aims to investigate the use of depth-sensing camera and positional tracking technologies to dynamically composite different visual content in real time for mixed-reality broadcasting applications. This could involve replacing green-screen backgrounds with dynamic virtual environments, or augmenting 3D models into a real-world video scene. A key goal of the project is to keep production costs as low as possible. The technical research will therefore be undertaken predominantly with off-the-shelf consumer hardware to ensure accessibility. At the same time, the developed techniques also need to be integrated with existing media production techniques, equipment and approaches, including user interfaces, studio environments and content creation.

Follow the project Blog

MSc Digital Entertainment - Masters Project:

Multi-View High-Dynamic-Range Video, working with Dr Christian Richardt

Background: Computer Science

BSc in Computer Science, Cardiff University, specialising in Visual Computing. Research project on boosting saliency research through the development of a new dataset including multiple categorised stimuli and distortions; fixations of multiple observers on the stimuli were recorded using an eye tracker.


https://zubr.co/author/kenneth/


2016
John Raymond Hill
Supervisor:
Prof Wen Tang

Holovis Flight Deck Officer VR Simulation System

I've always been excited by technologies which let us exceed our biological limitations and Virtual Reality offers endless possibility to achieve this. My research interests are in bringing down the barriers for communication between our senses and virtual environments to increase what we're able to experience and accomplish in them.

I am now a second-year student at Bournemouth University after coming to this course with a BSc in Computer Science and a few years out of academia. Please feel free to get in touch.


2016
Padraig Boulton (Paddy)
Supervisor:
Prof Peter Hall, Dr Kwang In Kim
Industrial Supervisor:
Alex Jolly

Recognition of Specific Objects Regardless of Depiction

Industrial Partner:

Disney Research

Research Project:

Recognition of Specific Objects Regardless of Depiction

Recognition numbers among the most important of all open problems in Computer Vision. The state of the art using neural networks achieves truly remarkable performance when given real-world images (photographs). However, with one exception, the performance of every mechanism for recognition falls significantly when the computer attempts to recognise objects depicted in non-photorealistic form. This project addresses that important literature gap by developing mechanisms able to recognise specific objects regardless of the manner in which they are depicted. It builds on the state of the art, which is alone in generalising uniformly across many depictions.

 

In this case, the objects of interest are specific objects rather than visual object classes; more particularly, the objects represent visual IP as defined by the Disney corporation. Thus an object could be “Mickey Mouse”, and the task would be to detect “Mickey Mouse” photographed as a 3D model, as a human wearing a costume, as a drawing on paper, as printed on a T-shirt and so on. Currently we are investigating how different art styles map salient information of object classes or characters, and using this to develop a recognition framework that can use examples from artistic styles to learn a domain-agnostic classifier capable of generalising to unseen depictive styles.

MSc Digital Entertainment - Masters project:  

Undoing Instagram Filters: creating a generative adversarial network (GAN) which takes a filtered Instagram photo and synthesises an approximation of the original photo. The project sought to recreate the state of the art and evaluate the suitability of TensorFlow and Torch.

Background: Automotive Engineering

MEng Automotive Engineering, Loughborough University. During that time I worked in motorsport aerodynamics for an industrial placement. Outside of university, my main interest is surfing (and luckily Bath is a lot closer to UK surf spots than Loughborough).


2016
Kyle Reed
Supervisor:
Prof Darren Cosker; Dr. Kwang In Kim
Industrial Supervisor:
Dr Steve Caulkin

Improving Facial Performance Animation using Non-Linear Motion

Industrial Partner:

Cubic Motion

Research Project:

Cubic Motion is a facial tracking and animation studio, most famous for their real-time live performance capture. The aim of this research is to improve the quality of facial motion capture and animation through the development of new methods for capture and animation.

We are investigating the utilisation of non-linear facial motion observed from 4D facial capture for improving the realism and robustness of facial performance capture and animation. As the traditional pipeline relies on linear approximations for facial dynamics, we hypothesise that using observed non-linear dynamics will automatically factor in subtle nuances such as fine wrinkles and micro-expressions, reducing the need of animator handcrafting to refine animations.

Starting with developing a pipeline for 4D capture of a performer’s range of motion (or Dynamic Shape Space), we apply this information to various components of the animation pipeline, from rigging and blendshape solving to performance capture and keyframe animation. We also investigate how, by acquiring a Dynamic Shape Space of multiple individuals, we can develop a motion manifold for the personalisation of individual expression, which can be used as a prior for subject-agnostic animation. Finally we validate the need for non-linear animation through comparison to linear methods and through audience perception studies.

 

MSc Digital Entertainment - masters project:

Using convolutional neural networks (CNNs) to predict occluded facial expressions when wearing head-mounted displays (HMDs) for VR. The project involves learning a non-linear motion manifold from facial performances, to introduce to a facial tracking and animation pipeline for better quality results. Other non-linear methods include the use of physical anatomical face models.

Other projects I've been involved in focus on facial expression, including learning personalised smiles from identity and user-authoring of expressions using genetic algorithms.

Background: Computer Science

BSc (Hons) Computer Science with Industrial Placement Year, University of Bath. Technology industrial placement at Nomura International, London: lead technician for the Global Corporate Technical Services web portal implementation; developer for Java EE web applications, including an online database management service with front-end development (JavaScript, JSP, HTML/CSS); liaison for the Corporate Standards and Improvement division.


2016
Catherine Taylor
Supervisor:
Prof Darren Cosker, Dr Neill Campbell
Industrial Supervisor:
Eleanor Whitley

Deformable objects for virtual environments

Industrial Partner:

Marshmallow Laser Feast

Research Project:

Deformable objects for virtual environments

There are currently no solutions at market that can rapidly generate a virtual reality 'prop' from a generic object, and then render it into an interactive virtual environment, outside of a studio. A portable solution such as this would enable creation of deployable immersive experiences where users could interact with virtual representations of physical objects in real time, opening up new possibilities for applications of virtual reality technologies in entertainment, but also in sports, health and engineering sectors.

This project combines novel algorithmic software for tracking deformable objects, interactive stereoscopic graphics for virtual reality, and an innovative configuration of existing hardware, to create the Marshmallow Laser Feast (MLF) DOVE system. The project objective is to create turn-key tools for repeatably developing unique immersive experiences and training environments. The DOVE system will enable MLF to create mixed reality experiences such as live productions, serialised apps and VR products/experiences to underpin significant business growth and new job creation opportunities.

Background: Maths

BSc Mathematics, Edinburgh University; Dissertation on Cosmological Models


2016
Lewis Ball
Supervisor:
Prof Lihua You, Prof Jian Jun Zhang
Industrial Supervisor:
Dr Mark Leadbeater, Dr Chris Jenner

Material based vehicle deformation and fracturing

I studied Physics (BSc) and Scientific Computing (MSc) at the University of Warwick. 

I am mainly interested in real-time graphics and physics simulation with applications for interactive media. I am currently with Reflections, a Ubisoft studio in Newcastle.


2016
Azeem Khan
Supervisor:
Dr Tom Fincham Haines, Dr James Laird
Industrial Supervisor:
Jose Paredes, Dr Dario Sancho

Procedural gameplay flow using constraints

Industrial Partner:

Ubisoft Reflections

Research Project:

Procedural gameplay flow using constraints

This project involves using machine learning to identify what players find exciting or entertaining as they progress through a level.  This will be used to procedurally generate an unlimited number of levels, tailored to a user's playing style.

Tom Clancy's The Division is one of the most successful game launches in history, and the Reflections studio was a key collaborator on the project. Reflections also delivered the Underground DLC, within a very tight development window. The key to this success was the creation of a procedural level design tool, which took a high level script that outlined key aspects of a mission template, and generated multiple different underground dungeons that satisfied this gameplay template. The key difference to typical procedural environment generation technologies, is that the play environment is created to satisfy the needs of gameplay, rather than trying to fit gameplay into a procedurally generated world.

The system used for TCTD had many constraints, and our goal is to develop technology that will build on this concept to generate an unlimited number of missions and levels procedurally, in an engine-agnostic manner, to be used for any number of games. We would like to investigate using Markov constraints, inspired by the 'flow machines' research currently being undertaken by Sony to generate music, text and more automatically in a style dictated by the training material: http://www.flow-machines.com/ (other techniques may be considered).
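
In the spirit of Markov constraints, generation samples from a learned transition model while enforcing hard structural requirements on the output. A minimal sketch (the tile names and probabilities are invented for illustration, not from the project):

```python
import random

# Hypothetical tile transition probabilities, as if learned from
# hand-authored levels (all names illustrative).
TRANSITIONS = {
    "corridor": {"corridor": 0.5, "room": 0.4, "exit": 0.1},
    "room":     {"corridor": 0.6, "room": 0.3, "exit": 0.1},
}

def generate_level(length, rng=random):
    """Sample a tile sequence from the Markov chain, with a simple
    hard constraint: 'exit' may only appear as the final tile."""
    level = ["corridor"]
    while len(level) < length:
        probs = dict(TRANSITIONS[level[-1]])
        if len(level) < length - 1:
            probs.pop("exit")  # constraint: no early exit
        tiles = list(probs)
        weights = [probs[t] for t in tiles]
        level.append(rng.choices(tiles, weights=weights)[0])
    return level
```

Proper Markov-constraint methods renormalise the whole chain so constrained sampling still matches the learned style distribution; this greedy filter only shows the idea of combining a learned model with hard gameplay requirements.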

Masters Project:

An Experimental Approach to the Complexity of Solving Bimatrix Games

Background: Physics

MSci Physics with Theoretical Physics, Imperial College


2015
Simone Barbieri
Supervisor:
Xiaosong Yang, Zhidong Xiao
Industrial Supervisor:
Ben Cawthorne

During the early stages of design, artists make sketches using paper and pencil. Sketching is a natural and flexible interface for representing conceptual designs. There are several advantages to using pencil and paper:

  • the author does not need to acquire any special knowledge;
  • it is easy for the author to change the result;
  • precision is not required to express an idea.

Thus, a system which involves a sketching interface must offer the same advantages to be convenient, or at least benefits that are greater than or comparable to them.

However, posing and modelling 3D characters from 2D input is a complex and open problem.

My idea is to create a system that allows the user to pose the character and, eventually, remodel each section by exploiting the character’s outline.

The proposed techniques will allow the user to draw a few simple sketches which will not only pose the character but also guide the detailed deformation of the shape flow, allowing them to draw just a partial outline of the character’s components, leaving the others untouched.



2015
Naval Bhandari
Supervisor:
Prof Eamonn O'Neill
Industrial Supervisor:
Simon Luck

Enhancing user interaction with data using AR/MR

Industrial Partner:

BMT Defence Services

Research Project:

An exploration into the enhancement of dimensionality, interactivity, and immersivity within augmented and virtual reality

This research will explore whether presenting status and instructional information in augmented or mixed reality, based on geographic, positional or other data, is beneficial to end users and the organisation. The information may be based upon realistic scenarios such as initial operating procedures from documentation used by the Ministry of Defence and managed by BMT. The user is required to understand information through this mechanism, interact with it (e.g. gesturally), and manipulate it to progress through tasks and activities. This research will explore innovations in how to consume the base data, how best to store and then represent it to the user to enable hands-free interaction, how to track actions completed centrally, and which devices to use to make this effective for end users. This will include rapid prototyping and evaluation of systems.

 

Background: Computer Science

BSc (Hons) Computer Science, University of Leeds


2015
Thomas Joseph Matthews
Thomas Joseph Matthews
Supervisor:
Dr Feng Tian / Prof Wen Tang
Industrial Supervisor:
Tom Dolby

Semi-Automated Proficiency Analysis and Feedback for VR Training

Virtual Reality (VR) is a growing and powerful medium that is finding traction in a variety of fields. My research aims to encourage immersive learning and knowledge retention through short-form VR training.

Our project is streamlining the process of proficiency analysis in virtual reality training by using performance recording and data analytics to directly compare subject matter experts and trainees. Currently, virtual reality training curricula require at least post-performance review and/or direct supervised interpretation to provide feedback, whereas our system will be able to use expert performance models to direct feedback towards trainees’ strengths and weaknesses, both in a specific scenario and across the subject curriculum.

Using an existing virtual reality training scenario developed by AiSolve and Children’s Hospital Los Angeles, subject matter experts will complete multiple performance variations in a single scenario. This provides a scenario action graph which is then used as a baseline to measure trainee performances against, noting significant variants in attributes like decision-making, stimuli perception and stress management. We will validate the system using objective and subjective accuracy metrics, implementation feasibility and usability measures.
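As a rough illustration of the expert-baseline comparison described above, a trainee's recorded action sequence can be scored against a set of expert runs. The action names and the similarity measure here are invented stand-ins, not the project's actual action-graph model:

```python
from difflib import SequenceMatcher

def proficiency_score(trainee_actions, expert_performances):
    """Score a trainee's action sequence against expert baselines.

    Returns the best similarity (0..1) against any expert run, using a
    simple sequence-matching ratio as a stand-in for a full
    action-graph comparison.
    """
    return max(
        SequenceMatcher(None, trainee_actions, expert).ratio()
        for expert in expert_performances
    )

# Hypothetical expert runs: two valid orderings of the same scenario.
experts = [
    ["assess", "call_team", "administer_drug", "reassess"],
    ["assess", "administer_drug", "call_team", "reassess"],
]
# A trainee run that skips a step scores below a perfect 1.0.
trainee = ["assess", "call_team", "reassess"]
print(proficiency_score(trainee, experts))
```

In the real system the comparison would also weight attributes such as decision-making, stimuli perception and stress management, rather than raw action order alone.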

More information on the VRSims framework this project is attached to can be found on the AiSolve website: http://www.aisolve.com/enterprise/



2015
Javier Dehesa
Javier Dehesa
Supervisor:
Julian Padget
Industrial Supervisor:
Andrew Vidler

Modelling human–character interaction in virtual reality

Industrial Partner

Ninja Theory

Research Project:

Modelling human–character interaction in virtual reality

Interaction design in virtual reality is difficult because of the typical nature of the input (tracked head and hand positions) and the freedom of action of the user. In the case of interaction with virtual (human-like) characters, generating plausible reactions under every circumstance generally requires intensive animation work and complex hand-crafted logic, sometimes also imposing limitations on the world design. We address the problem of real-time human–character interaction in virtual reality by proposing a general framework that interprets the intentions of the user in order to guide the animation of the character.

Using a vocabulary of gestures, the framework analyses the head and hands 3D input provided by the tracking hardware and decides the actions that the character should take, which are then used to synthesise an appropriate animation for the scene. We propose a novel combination of data-driven models that perform the tasks of gesture recognition and character animation, guided by simple logic describing the interaction scenarios.
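The gesture-to-action pipeline above can be sketched in miniature. Everything here is illustrative: the feature vectors, gesture vocabulary and action mapping are invented stand-ins, and the actual framework uses learned, data-driven models rather than nearest-template matching:

```python
import math

# Hypothetical gesture templates: each gesture maps to a coarse feature
# vector (e.g. relative hand height, hand speed), standing in for the
# richer tracked head/hand input a real system would use.
GESTURES = {
    "raise_guard": (0.8, 0.3),
    "swing": (0.4, 0.9),
    "idle": (0.2, 0.2),
}

# Simple interaction logic: recognised gesture -> character reaction.
ACTIONS = {
    "raise_guard": "block",
    "swing": "parry",
    "idle": "wait",
}

def classify(features):
    """Nearest-template gesture recognition over the vocabulary."""
    return min(GESTURES, key=lambda g: math.dist(GESTURES[g], features))

def character_action(features):
    """Decide the character's action from the user's tracked features."""
    return ACTIONS[classify(features)]

print(character_action((0.45, 0.85)))  # nearest template is "swing"
```

The chosen action would then drive the animation synthesis stage, which the framework handles with a separate data-driven model.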

We consider the problem of sword fighting in virtual reality as our case study and identify other potential applications and extensions. We expect our research to establish an academically and industrially valuable interaction framework, while also providing novel insights in real-time applications of machine learning to virtual reality environments.

Background: Mathematics/ Computer Science

Masters Degree in Mathematics and Computer Science, Universidad de Cantabria


2015
Lazaros Michailidis
Lazaros Michailidis
Supervisor:
Dr Emili Balaguer-Ballester

Neurogaming

Immersion is the psycho-cognitive state that mediates an individual's interaction with an activity. My research is dedicated to uncovering the brain correlates of immersion in video games using electroencephalography. A game's capacity to instil immersion is said to be a significant indicator of its success. However, we have yet to determine what exactly happens while a player is immersed, whether this state truly contributes to increased performance, and what developers can do to maintain it.

For this purpose, I have developed a custom virtual reality game in the Tower Defence genre, designed for PlayStation VR, and we will also employ machine learning to detect the sustenance and loss of this state.

 

The project is in collaboration with Sony Interactive Entertainment Europe.


2015
Joanna Tarko
Joanna Tarko
Supervisor:
Dr Christian Richardt
Industrial Supervisor:
Tim Jarvis

Graphics Insertions into Real Video for Market Research

Industrial Partner

CheckmateVR

Research Project

Often, when asked, people can't explain why they bought a specific product. The aim of market research is to design research methods that help to explain why the decision was made. In a perfect scenario, study participants would be placed in a real, but fully-controlled shopping environment; however, in practice, such environments are very expensive or even impossible to build. Virtual reality (VR) environments, in turn, are fully controllable and immersive, but they lack realism.

My project combines video camera footage with computer-generated elements to create sufficiently realistic (or plausible) but still controlled environments for market research. The computer graphics elements can range from texture replacement (as on the screen of a mobile phone) through to complete three-dimensional models of buildings (such as a petrol station). More commonly, billboards, posters and individual product items comprise the graphics models. After working with standard cameras, I focused on 360° cameras (cameras that capture everything around them in every direction), which are rapidly gaining in popularity and may provide a good replacement for VR content in terms of immersion.

MSc Digital Entertainment - Masters project:

Matchmoving on set with the use of real-time visual-inertial localization and depth estimation, working with Dr Neill Campbell

Background: Applied Physics/ Computer Graphics


2015
Ifigeneia Mavridou
Ifigeneia Mavridou
Supervisor:
Dr Emili Balaguer-Ballester, Dr Ellen Seiss, Dr Alain Renaud
Industrial Supervisor:
Dr Charles Nduka

Emotion and engagement analysis of virtual reality experiences

I am interested in emotion stimulation and the identification of methods for emotion recognition in Virtual Reality (VR) environments. Currently at Emteq, I am working towards enhancing human-computer interaction using emotional states as an input modality, by assisting the development of a facial sensing platform that measures emotions through facial gestures and biometric responses. Emotion stimulation is related to engagement and “presence” in games and VR; these factors can assist in the creation of immersive experiences, as well as the efficient content design of a VR product in terms of replayability. The acquisition and analysis of physiological signals and facial expressions play an important role in my studies towards evaluating and measuring the dimensions of affect and their relation to cognitive processes such as attention and memory. For my studies I will run a sequence of user-behaviour experiments in VR conditions to explore emotion stimulation, identification and recognition in VR.


2015
Tom Matko
Tom Matko
Supervisor:
Prof Jian Chang
Industrial Supervisor:
John Leonard, Wessex Water

Flow Visualisation of Computational Fluid Dynamics Modelling

Aeration systems have a major influence on the oxygen transfer efficiency and hydrodynamics that affect biological activated sludge treatment. Hydrodynamics in an aeration bioreactor are complex due to the presence of multiphase gas–liquid–solid flows. It is important to understand the flow patterns and bubble plume distributions for the effective design of aeration bioreactor oxidation ditches (ODs). For efficient OD design, grid-based computational fluid dynamics (CFD) models are a powerful tool. Emerging particle- and hybrid-based numerical fluid methods for computer animation enable effective 3D flow visualisation of the hydrodynamics. Wessex Water (the industrial partner of the project) recognises that CFD models are a useful numerical tool, and that the visualisation of flow patterns, bubble plumes and solid biomass in aeration bioreactors can be effective for demonstrating design improvements.


2014
Asha Ward
Asha Ward
Supervisor:
Dr Tom Davis
Industrial Supervisor:
Luke Woodbury

Music Technology for users with Complex Needs

Music is essential to most of us: it can light up all areas of the brain, help develop communication skills, help to establish identity, and offer a unique path for expression. People with complex needs can face barriers to participating in music-making and sound exploration activities when using instruments and technology aimed at typically able users. My research explores the creation of novel, bespoke hardware and software to make music creation accessible to those with cognitive, physical, or sensory impairments and/or disabilities.

Using tools like Arduino and sensor-based hardware, alongside software such as Max/MSP and Ableton Live, the aim is to provide innovative systems that allow for the creation of personal instruments tailored to individual needs and capabilities. These instruments can then be used to interact with sound in new ways not available with traditional acoustic instruments: technology can turn tiny movements into huge sounds, and tangible user interfaces can be used to investigate the relationship between the physical and digital worlds, leading to new modes of interaction. Working with my industrial sponsor, Three Ways School in Bath, and industrial mentor Luke Woodbury of Dotlib, my research will use an Action Research methodology to create bespoke, tangible tools combining hardware and software that allow central users, and those facilitating, to create and explore sound in a participatory way.

 


2014
Ieva Kazlauskaite
Ieva Kazlauskaite
Supervisor:
Dr Neill Campbell, Prof Darren Cosker
Industrial Supervisor:
Tom Waterson

ML for character animation and motion style synthesis

Industrial Partner

Electronic Arts (EA), Frostbite team

Research Project

Machine Learning for character animation and motion style synthesis

The project investigates the use of machine learning techniques to improve the interactive animation of game characters based on motion capture data.

Background: Mathematics

Master's degree, Mathematics (MMath), University of Durham


2014
Mark Moseley
Mark Moseley
Supervisor:
Dr Leigh McLoughlin
Industrial Supervisor:
Sarah Gilling

My research is based within the area of Assistive Technology:

Young people who have complex physical disabilities and good cognition may face many barriers to learning, communication, personal development, physical interaction and play experiences. Physical interaction and play are known to be important components of child development, but this group currently has few suitable ways in which to participate in these activities.

Technology can help to facilitate such experiences. My research aims to develop a technology-based tool to provide this group with the potential for physical interaction and physical play, by providing a means of manipulating objects.  The tool will be used to develop the target group's knowledge of spatial concepts and the properties of objects. It will utilise eye gaze technology, robotics and haptic feedback (artificial sensation) in order to simulate physical control and sensations.

My research involves Victoria Education Centre in Poole, Dorset.


2014
Garoe Dorta Perez
Garoe Dorta Perez
Supervisor:
Dr Neill Campbell, UoB, Dr Lourdes Agapito, UCL
Industrial Supervisor:
Dr Sara Vicente, Dr Ivor Simpson

Learning models for intelligent photo editing

Industrial Partner

Anthropics Technology Ltd

Research Project

Learning models for intelligent photo editing

My main research interests lie in the areas of machine learning and computer vision. My project at Anthropics Technology Ltd. involves face modelling applications using deep neural networks (DNNs), which ties in with the software produced at the company, centred around human beauty with a special focus on facial analytics.

The goal of the project is to develop novel computer vision and graphics technologies that enable users to intuitively edit photos to produce professional quality results. Photo editing applications need to be simple for a user to interact with, but sufficiently flexible to remove any flaws without introducing artefacts. This project will develop vision models for extracting information about the objects in the scene and graphics models to provide realistic user driven image enhancements. Image synthesis will likely be employed for generating data to train the models and for creating novel image effects.

Garoe will be presenting his research Learning models for intelligent photo editing, in a departmental seminar on 22nd November 2018, 13:15 in room 8W 2.20.


http://people.bath.ac.uk/gdp24/


2013
Tom Wrigglesworth
Tom Wrigglesworth
Supervisor:
Dr Leon Watts, Dr Simon Jones
Industrial Supervisor:
Lucy May Maxwell

Towards a Design Framework for Museum Visitor Engagement with Historical Crowdsourcing Systems

Imperial War Museums

I am researching how novice users engage with online museum collections through crowd-sourcing initiatives. My project is in collaboration with the Imperial War Museums and is primarily focused on the American Air Museum website - a large online archive of media and information that accommodates crowd-sourced contributions. My research interests are in Human-Computer Interaction, Research Through Design methodologies and encounters with cultural heritage through web-browser based technologies.


2013
Anamaria Ciucanu
Anamaria Ciucanu
Supervisor:
Prof Darren Cosker, Dr Neill Campbell
Industrial Supervisor:
Iain Gilfeather

Reconstructing / Enhancing 3D Animation of Stop Motion Character

Industrial partner

Formerly working with Fat Pebble

Research Project:

E-StopMotion: Reconstructing and Enhancing 3D Animation of Stop Motion Characters by Reverse Engineering Plasticine Deformation

Stop Motion Animation is the traditional craft of giving life to handmade models. The unique look and feel of this art form is hard to reproduce with 3D computer-generated techniques, owing to the unexpected details that appear from frame to frame and to the sometimes choppy appearance of the character movement. The artist's task can be overwhelming, as they have to reshape a character into hundreds of poses to obtain just a few seconds of animation. The results are usually applied in 2D media like films or platform games, so character features that took a lot of effort to create remain unseen.

We propose a novel system that allows the creation of 3D stop motion-like animations from 3D character shapes reconstructed from multi-view images. Given two or more reconstructed shapes from key frames, our method uses a combination of virtual clay deformation, non-rigid registration and as-rigid-as-possible interpolation to generate plausible in-between shapes. This significantly reduces the artist's workload, since far fewer poses are required. The reconstructed and interpolated shapes, with complete 3D geometry, can be manipulated even further through deformation techniques. The resulting shapes can then be used as animated characters in games or fused with 2D animation frames for enhanced stop motion films.

https://vimeo.com/289971097

Video accompanying the publication:
Anamaria Ciucanu, Naval Bhandari, Xiaokun Wu, Shridhar Ravikumar, Yong-Liang Yang, Darren Cosker. 2018. E-StopMotion: Digitizing Stop Motion for Enhanced Animation and Games. In MIG '18: Motion, Interaction and Games, November 8-10, 2018, Limassol, Cyprus. ACM, New York, NY, USA, 11 pages.


2013
Rahul Dey
Rahul Dey
Supervisor:
Dr Christos Gatzidis
Industrial Supervisor:
Jason Doig

New Games Technologies

My research focuses on real-time voxelization algorithms and procedural content creation in voxel spaces. Creating content using voxels is more intuitive than polygon modelling and has a number of other advantages. This research intends to provide novel methods for real-time voxelization and for subsequently editing the resulting voxel content using procedural generation techniques. These methods will also be adapted for next-generation consoles to take advantage of the features they expose.


2013
Adam Boulton
Adam Boulton
Supervisor:
Dr Rachid Hourizi, Prof Eamonn O'Neill
Industrial Supervisor:
Alice Guy

The Interruption and Abandonment of Video Games

Industrial Partner:

PaperSeven

Research Project

The cost of video game development is rapidly increasing as the technological demands of producing high quality games grow ever larger. With budgets set to spiral into the hundreds of millions of dollars, and audience sizes rapidly expanding as gaming reaches new platforms, we investigate the phenomenon of task abandonment in games. Even the most critically acclaimed titles are rarely completed by even half their audience. With the cost of development so high, it is more important than ever that developers, as well as the players, get value for money. We ask why so few people are finishing their games, and investigate whether anything can be done to improve these numbers.

Background: Computer Science

BSc Computer Science, University of Cardiff


2013
Tom Smith
Tom Smith
Supervisor:
Dr Julian Padget
Industrial Supervisor:
Andrew Vidler

Procedural content generation for computer games

Industrial Partner

Ninja Theory

Research Project

Procedural content generation for computer games

Procedural content generation (PCG) is increasingly used in games to produce varied and interesting content. However, PCG systems are becoming increasingly complex and tailored to specific game environments, making them difficult to reuse, so we investigate ways to make PCG code reusable and to allow simpler, usable descriptions of the desired output. By allowing the behaviour of the generator to be specified without altering the code, we provide increasingly data-driven, modular generation. We look at reusing tools and techniques originally developed for the semantic web, and investigate the possibility of using them with industry-standard games development tools.
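The idea of specifying a generator's behaviour in data rather than code can be sketched as follows. The spec format and room vocabulary are invented purely for illustration and are not the project's actual tooling:

```python
import random

# Hypothetical declarative spec: the generator's behaviour lives in
# data, so a designer can retarget it without touching the code below.
DUNGEON_SPEC = {
    "rooms": {"min": 3, "max": 5},
    "room_types": ["corridor", "hall", "treasure"],
}

def generate(spec, seed=None):
    """Produce a room layout driven entirely by the spec.

    Seeding makes the output reproducible, which helps when the same
    content must be regenerated deterministically.
    """
    rng = random.Random(seed)
    count = rng.randint(spec["rooms"]["min"], spec["rooms"]["max"])
    return [rng.choice(spec["room_types"]) for _ in range(count)]

print(generate(DUNGEON_SPEC, seed=42))
```

Swapping in a different spec (say, cave segments instead of dungeon rooms) changes the output without any change to `generate`, which is the reuse property the research aims for at much larger scale.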

Background: Computer Science

Master of Engineering (MEng), Computer Science with Artificial Intelligence, University of Southampton

 


2013
Stephane Le Boeuf
Stephane Le Boeuf
Supervisor:
Dr Ian Stephenson
Industrial Supervisor:
Dr Sara C. Schvartzman

New VFX Technologies

People have always sought to reproduce the world around them. To meet film's demand for convincing imagery, computer scientists have developed both physically based photorealistic rendering and plausible photorealistic rendering. I am working on this problem from two promising directions: finding faster ways to render physically based scenes, and finding better ways to digitise the real world. As GPUs become ever faster, I am investigating how to use them correctly and efficiently to produce relevant solutions for the VFX industry.


2013
Fabio Turchet
Fabio Turchet
Supervisor:
Prof Alexander Pasko
Industrial Supervisor:
Dr Sara C. Schvartzman

New VFX Technologies

My research project focuses on the simulation of musculoskeletal systems for the visual effects industry. Movies often feature creatures and digital doubles that have to look real, and part of this realism comes from anatomically correct deformation of soft tissues and skin.

Challenges in the area are represented by the complexity of the many interacting muscles present in the body that have to be simulated numerically and efficiently with methods that take into account collisions, material anisotropy, non-linearity and artistic control.


2013
Zack Lyons
Zack Lyons
Supervisor:
Dr Leon Watts
Industrial Supervisor:
Prof Nigel Harris

Virtual Therapy

Industrial Partner:

Designability / Brain Injury Rehabilitation Trust

Research Project:

Virtual Therapy - A Story-Driven and Interactive Virtual Environment for Acquired Brain Injury Rehabilitation

My research involves using interactive computational simulations to deliver meaningful benefits to people with acquired brain injuries. It will contribute to the science base on human-agent interaction, as well as to research on Human-Computer Interaction in mental health. I am currently carrying out exploratory work with the intention of articulating design goals to inform future development of simulations. The envisioned emphasis of the project is in exploring the unique dynamics of the three-way interaction between clients, clinicians and the machine.

 


2013
Elena Marimon Munoz
Elena Marimon Munoz
Supervisor:
Dr Hammadi Nait-Charif
Industrial Supervisor:
Phil Marsden

Digital Radiography: Image acquisition and Image enhancement

My project is sponsored by PerkinElmer, a multinational technology corporation focused on human and environmental health, and by the Centre for Digital Entertainment. The project focuses on the characterization of some of the components that affect image acquisition in a Dexela CMOS X-ray detector, and on the development of scatter removal software for image post-processing in mammography applications.


2013
Richard Jones
Richard Jones
Supervisor:
Dr Richard Southern
Industrial Supervisor:
James Bird & Ian Masters

New VFX Technologies

Richard is working alongside VFX studio Double Negative to develop improvements to the liquid simulation toolset for creating turbulent liquid and whitewater effects for feature film visual effects. The current toolset for liquid simulation is built around the creation of simple single-phase liquid motion, such as ocean waves and simple splashes, but struggles to capture the often more exciting mixed air-liquid phenomena of very turbulent fluid splashes and sprays. The creation of turbulent effects therefore relies heavily on artistic input, and on the experience and intuition to use existing tools in unorthodox ways. By incorporating more physical models of turbulent fluid phenomena into the existing liquid simulation toolset, his project aims to develop techniques that better capture realistic turbulent fluid effects and allow faster turnaround of the highly detailed liquid effects required for feature film.




© Centre for Digital Entertainment 2018. Site by MediaClash.