Our students and projects

We currently have nearly 50 live projects with 40 companies.

Our proactive Research Engineers (REs) are involved in everything from procedural content generation for international games companies and assistive technologies for stroke rehabilitation to the future of interactive technologies for major broadcasters and virtual reality for naval training.

Specialist areas include:

Virtual and Augmented Reality applications, eye-tracking, automatic creation of 3D content, stop motion animation, assistive technology, brain rehabilitation, procedural content generation, real-time rendering, texture-mapping, serious games, UIX, HCI, voxels, fluid simulation, GPGPU programming, parallel computing, motion and facial capture, medical imaging, volumetric visualisation, interactive content, participatory design, the internet of things and automatic musculoskeletal simulation – among many others!

Get to know some of our REs below and see our fresh new video filmed during our CDE Winter Networking Event at the British Film Institute, London.

 


2017
Alexandros Rotsidis
Supervisor:
Prof Peter Hall; Dr Christof Lutteroth
Industrial Supervisor:
Mark Lawson

Creating an intelligent animated avatar system

Industrial Partner:

Design Central (Bath) Ltd t/a DC Activ / LEGO

Research Project:

Creating an intelligent avatar: using Augmented Reality to bring 3D models to life. The project is creating a 3D intelligent multi-lingual avatar system that can realistically imitate, and interact with, shoppers (adults), consumers (children), retail staff and commercial customers as users or avatars. The avatars use different dialogue, appearance and actions based on initial data and on feedback about the environment and context in which they are placed, creating 'live' interactivity with other avatars and users.

While store assistant avatars and virtual assistants are now commonplace, they often act in a scripted and unrealistic manner. These avatars are also often limited in their visual representation (i.e. they are usually humanoid).

This project is an exciting opportunity to apply technology and visual design to many different 3D objects, bringing them to life to guide and help people (both individually and in groups) learn from their mistakes in a safe virtual space and make better-quality decisions, increasing commercial impact.

Masters Project: AR in Human Robotics

Augmented Reality used in Human Robotics Interaction, working with Dr Yongliang Yang.

Background: Computer Science

BSc (Hons) Computer Science from Southampton University; worked in industry for five years as a web developer. A strong interest in Computer Graphics and Machine Learning led me to the EngD programme.


https://github.com/alexs7


2017
Valentin Miu

Research Interests: Application of neural networks

I am implementing a convolutional neural network based real-time 3D fluid simulator in a game engine, using an algorithm devised by Google. I am interested in neural networks and their applications in simulation acceleration, computer vision and artificial intelligence, in fields such as VFX, gaming, virtual reality and augmented reality.
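
A minimal, hypothetical sketch (in PyTorch, not the project's actual code) of the general idea behind such a simulator: a small 3D convolutional network learns to approximate the expensive pressure-projection step of a grid fluid solver, in the spirit of the Google-devised algorithm mentioned above. Layer sizes and the toy input are illustrative assumptions.

import torch
import torch.nn as nn

class PressureCNN(nn.Module):
    """Learns an approximate pressure field from velocity divergence."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, divergence):
        # Input: divergence of the velocity field on a voxel grid (N, 1, D, H, W).
        # Output: an approximate pressure field used to make the flow divergence-free,
        # replacing an iterative linear solve with one network evaluation per frame.
        return self.net(divergence)

model = PressureCNN()
divergence = torch.randn(1, 1, 32, 32, 32)   # toy 32^3 grid
pressure = model(divergence)
print(pressure.shape)                        # torch.Size([1, 1, 32, 32, 32])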

Industrial Partner:

If you are interested in sponsoring and supporting research in the application of neural networks, please contact Mike Board, Bournemouth CDE Project Manager.

Background: Physics

MSci Physics, University of Glasgow, graduating with a first-class degree. During this time I familiarised myself with compositing and 2D/3D animation in a non-professional setting. In my first year at the CDE, I successfully completed Masters-level courses in Maya, OpenGL and Houdini, and have been learning CUDA GPU programming and machine learning.


http://miu-v.com/


2017
Thomas Williams
Supervisor:
Dr Elies Dekoninck, Dr Simon Jones, Dr Christof Lutteroth
Industrial Supervisor:
Prof Nigel Harris

AR as a cognitive prosthesis for people living with dementia

Industrial Partner:

Designability

Research Project:

Investigating the use of Augmented Reality as a cognitive prosthesis for people living with dementia

There have been considerable advances in the technology and range of applications of virtual and augmented reality environments. However, to date, there has been limited work examining design principles that would support successful adoption (Gandy 2017). Assistive technologies have been identified as a potential solution for the provision of elderly care. Such technologies have in general the capacity to enhance the quality of life and increase the level of independence among their users. 

The aim of this research project is to explore how augmented reality (AR) could be used to support those with dementia with daily living tasks and activities. This will specifically focus on those living with mild to moderate dementia and their carers. Designability have been working on task sequencing for different types of daily living tasks and have amassed considerable expertise in how to prompt people with cognitive difficulties, through a range of everyday multi-step tasks (Boyd 2015). This project would allow us to explore how AR technology could build on that expertise.

The research will involve developing new applications for use with augmented reality technology such as the Microsoft HoloLens, Samsung AR or Meta 2. These augmented reality technologies are all still in their early stages of technology maturity, however they are at the ideal stage of development to explore their application in such a unique field as assistive technology.

MSc Digital Entertainment - Masters project:

A novel gaze tracking system to improve user experience at Cultural Heritage sites, with Dr Christof Lutteroth

Background: Maths/Physics

BSc (Hons) Mathematics and Physics, University of Bath (four-year course with placement).


http://blogs.bath.ac.uk/ar-for-dementia/


2017
Michelle Wu

Industrial Partner:

If you are interested in sponsoring and supporting research on using neural networks to improve the process of using mocap data in the generation of high-quality animation, please contact Mike Board, Bournemouth CDE Project Manager.

Research Interest: Motion synthesis with Neural Networks

I am interested in Motion and Performance Capture and how Machine Learning algorithms can be applied to motion data for applications in the VFX and games industries. My current research project focuses on the design of a framework for character animation synthesis from content-based motion retrieval. The project's aim is to reuse collections of human motion data, exploiting unsupervised learning to train an effective motion retrieval method. It will provide animators with more control over the generation of high-quality animations, using Neural Networks for motion synthesis purposes.
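
As a rough illustration of what content-based motion retrieval with unsupervised learning can look like (a generic sketch, not the project's pipeline), the snippet below embeds motion clips into a low-dimensional space learned without labels and retrieves the clips nearest to a query; the clip data here are random placeholders for real mocap features.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
clips = rng.normal(size=(200, 60 * 31))   # 200 clips, 60 frames x 31 joint angles, flattened
query = rng.normal(size=(1, 60 * 31))     # a new motion to match against the collection

# Learn a low-dimensional motion representation without labels.
embedding = PCA(n_components=16).fit(clips)
index = NearestNeighbors(n_neighbors=5).fit(embedding.transform(clips))

# Retrieve the most similar clips; a synthesis stage (e.g. a neural network)
# could then blend or refine these to produce the final animation.
_, neighbours = index.kneighbors(embedding.transform(query))
print(neighbours)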

Background:  Computer Animation, Games and Effects

BSc Software Development for Animation, Games and Effects, Bournemouth University.

Research Assistant in Human-Computer Interaction/Computer Graphics in collaboration with the Modelling Animation Games Effects (MAGE) group within the National Centre for Computer Animation (NCCA), focusing on the development and dissemination of the SHIVA Project, software that provides virtual sculpting tools for people with a wide range of disabilities.


2017
Kenneth Cynric Dasalla
Supervisor:
Dr Christian Richardt, Dr Christof Lutteroth
Industrial Supervisor:
Jack Norris, ZubrVR

Mixed Reality Broadcast Solutions

Industrial Partner:

ZubrVR

Research Project:

Exploring the use of real-time depth-sensing cameras and positional tracking technologies in video for Mixed Reality Broadcast Solutions - technologies, workflows and implications for content creation

The project aims to investigate the use of depth-sensing camera and positional tracking technologies to dynamically composite different visual content in real time for mixed-reality broadcasting applications. This could involve replacing green-screen backgrounds with dynamic virtual environments, or augmenting 3D models into a real-world video scene. A key goal of the project is to keep production costs as low as possible; the technical research will therefore be undertaken predominantly with off-the-shelf consumer hardware to ensure accessibility. At the same time, the developed techniques also need to be integrated with existing media production techniques, equipment and approaches, including user interfaces, studio environments and content creation.
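
The core compositing idea can be sketched in a few lines (a toy example with synthetic arrays standing in for a real depth camera feed and a game-engine render, not the project's implementation): for every pixel, whichever layer is closer to the camera wins, so virtual objects can pass both in front of and behind real ones.

import numpy as np

h, w = 480, 640
video_rgb = np.full((h, w, 3), 90, dtype=np.uint8)         # live camera frame
video_depth = np.full((h, w), 2.0)                         # metres from the sensor
virtual_rgb = np.zeros((h, w, 3), dtype=np.uint8)
virtual_rgb[:, :, 1] = 255                                 # rendered virtual content
virtual_depth = np.full((h, w), 3.0)
virtual_depth[100:300, 200:400] = 1.0                      # one virtual object in front

nearer = virtual_depth < video_depth                       # virtual pixel is closer
composite = np.where(nearer[..., None], virtual_rgb, video_rgb)
print(composite.shape, int(nearer.sum()), "virtual pixels composited in front")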

MSc Digital Entertainment - Masters Project:

Multi-View High-Dynamic-Range Video, working with Dr Christian Richardt

Background: Computer Science

BSc in Computer Science, Cardiff University, specialising in Visual Computing. My research project, on boosting saliency research, focused on the development of a new dataset that includes multiple categorised stimuli and distortions; fixations of multiple observers on the stimuli were recorded using an eye tracker.

 


2017
Sameh Hussain
Supervisor:
Prof Peter Hall
Industrial Supervisor:
Andrew Vidler

Applications of artistic style transfer to computer games

Research Project:

Procedural generation

Investigations into real-time applications of style transfer incorporating inference of contextual details to produce stylistic and/or artistic post-processing effects.
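
For illustration only (made-up layer sizes, not Ninja Theory's pipeline): real-time style transfer is typically run as a feed-forward convolutional network trained offline against a style image and then applied to each rendered frame as a post-processing pass. The toy network below shows only the shape of that per-frame inference step.

import torch
import torch.nn as nn

class StyleNet(nn.Module):
    """Tiny feed-forward stylisation network applied per frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=9, padding=4), nn.Sigmoid(),
        )

    def forward(self, frame):
        return self.net(frame)

stylise = StyleNet().eval()
with torch.no_grad():
    frame = torch.rand(1, 3, 720, 1280)   # one rendered game frame (RGB)
    stylised = stylise(frame)             # would be presented instead of the raw frame
print(stylised.shape)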

Industrial Partner:

Ninja Theory

MSc Digital Entertainment - masters project: 

A parametric model for linear flames, with Prof Peter Hall

Background: Mechanical Engineering

MEng in Mechanical Engineering, University of Bath, including a one-year placement with Airbus Defence and Space, where I developed software used to monitor and assess manufacturing performance.


2017
Miguel Ramos Carretero

Research Interests: Character Development

My research interests are in character development and range from the craft of organic rigging solutions to the creation of appealing animation and visual imagery. I am currently working on a research project focused on expressive facial animation, using the Facial Action Coding System (FACS) as the basis of my research. I intend to create artist-friendly workflows for the craft of expressive animation in both realistic and stylised characters, using a mix of motion capture technology and computer animation techniques based on blendshapes.

Background: Computer Science and Computer Graphics

Research engineer at the KTH Royal Institute of Technology (Stockholm, Sweden) and at the research lab Fields of View (Bangalore, India), working on areas including procedural modelling, crowd behaviour, traffic simulation, robot kinematics and virtual reality, using CG software tools such as 3D Studio Max and Unity. I have also been developing my skills in traditional drawing techniques for still-life and life drawing.


http://bigomay.com


2017
Rory Clark
Supervisor:
Dr Feng Tian
Industrial Supervisor:
Adam Harwood

VR and AR applications for Ultrahaptics technology

Industrial Partner: Ultrahaptics

Research Project: VR and AR applications for Ultrahaptics technology

Background: Games Programming

BSc Games Programming, Bournemouth University, focusing on the use and development of games and game engines, graphical rendering, 3D modelling, and a number of programming languages. My final-year dissertation consisted of the research, development and testing of a virtual reality event-planning simulation utilising the HTC Vive. In the past I've created and developed projects for a multitude of systems, ranging from the web and mobile to smart wearables and VR headsets.


https://rory.games


2017
Marcia Saul
Supervisor:
Dr Emili Balaguer-Ballester

Research Interests: Medical applications of computational neuroscience

My main fields of interest are computational neuroscience, brain-computer interfaces and machine learning, with the use of games in applications for rehabilitation and improving the quality of life for patients and persons in care.

Industrial Partner:

If you are interested in sponsoring and supporting research on machine learning and neural networks, particularly their application to recognising user behaviours in computer games or to the implementation of intelligent games for cognitive training, please contact Mike Board, Bournemouth CDE Project Manager.

MRes - Masters project:

Using computational proprioception models and artificial neural networks to predict two-dimensional wrist position.

Background: Psychology and Computational Neuroscience

BSc in Biology with Psychology, Royal Holloway, University of London.

MSc in Computational Neuroscience & Cognitive Robotics, University of Birmingham.


2017
Victor Ceballos Inza

Research Interests: Geometry processing with deformable objects

My research interests include geometry processing (mesh processing with deformable objects), computer animation and visual effects, and in particular their application in the film industry.

Masters Project: Procedural Modelling

Artists can sometimes find procedural modelling systems unintuitive, as these applications rely on the technical skills of the user. By incorporating differentiation of the produced geometry, we can build a system that allows more direct manipulation of the models, and we show that such a system can be built to run efficiently in real time. This project builds on a previous dissertation carried out at UCL; we seek to improve the existing application in terms of efficiency, as well as to add new functionality, including support for novel procedural rules and higher-order differentiation. Working with Dr Yongliang Yang.
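
A toy sketch of the differentiability idea described above (the "procedural model" here is deliberately trivial and hypothetical): because the generated geometry is a differentiable function of the model's parameters, the system can solve for parameters that reproduce a direct edit, such as the user dragging a corner of the model to a new position.

import torch

def procedural_corner(width, height):
    # Top-right corner of a procedurally generated box, expressed as a
    # differentiable function of the model's two parameters.
    return torch.stack([width / 2, height])

params = torch.tensor([2.0, 3.0], requires_grad=True)   # width, height
target = torch.tensor([1.8, 4.5])                       # where the user dragged the corner
optimiser = torch.optim.Adam([params], lr=0.05)

for _ in range(300):
    optimiser.zero_grad()
    loss = ((procedural_corner(params[0], params[1]) - target) ** 2).sum()
    loss.backward()        # gradients flow back to the procedural parameters
    optimiser.step()

print(params.detach())     # roughly [3.6, 4.5]: parameters that realise the edit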

Background: Maths, AI and Computer Graphics

BSc in AI & Maths, University of Edinburgh.

MSc in Computer Graphics, Vision & Imaging, University College London.

Research Assistant, Toshiba Healthcare, Edinburgh, working on the application of computer vision techniques to healthcare, for example the detection of falls in the elderly.

Research at the Universitat Politècnica de Catalunya, Barcelona, on the analysis of colonic content for diagnosis.


2016
Padraig Boulton (Paddy)
Supervisor:
Prof Peter Hall
Industrial Supervisor:
Alex Jolly

Recognition of Specific Objects Regardless of Depiction

Industrial Partner:

Disney Research

Research Project:

Automatic visualisation

Recognition numbers among the most important of all open problems in Computer Vision. The state of the art using neural networks achieves truly remarkable performance when given real-world images (photographs). However, with one exception, the performance of every mechanism for recognition falls significantly when the computer attempts to recognise objects depicted in non-photorealistic form.

This project addresses that very important gap in the literature by developing mechanisms able to recognise specific objects regardless of the manner in which they are depicted. It builds on the state of the art, which is alone in generalising uniformly across many depictions. In this case, the objects of interest are specific objects rather than visual object classes, and more particularly the objects represent visual IP as defined by the Disney corporation. Thus an object could be “Mickey Mouse”, and the task would be to detect “Mickey Mouse” photographed as a 3D model, as a human wearing a costume, as a drawing on paper, as printed on a T-shirt and so on.

MSc Digital Entertainment - masters project:  Undoing Instagram Filters

This project will create a generative adversarial network (GAN) which takes a filtered Instagram photo and synthesizes an approximation of the original photo. I am recreating state-of-the-art methods and evaluating the suitability of TensorFlow and Torch.
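
A rough sketch of that setup, with made-up layer sizes (an assumption, not the project's architecture): a generator maps a filtered photo back towards an unfiltered one, while a discriminator learns to tell restored photos from genuine originals; the two are trained adversarially, often with an additional reconstruction loss against the ground-truth original.

import torch
import torch.nn as nn

generator = nn.Sequential(             # filtered RGB in, "unfiltered" RGB out
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
)
discriminator = nn.Sequential(         # image in, real/fake score out
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
)

filtered = torch.rand(4, 3, 128, 128)  # a batch of filtered photos
restored = generator(filtered)         # approximation of the originals
score = discriminator(restored)        # drives the adversarial training signal
print(restored.shape, score.shape)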

Background: Automotive Engineering

MEng Automotive Engineering, Loughborough University. During that time I worked in motorsport aerodynamics for an industrial placement. Outside of university, my main interest is surfing (and luckily Bath is a lot closer to UK surf spots than Loughborough).


2016
Catherine Taylor
Supervisor:
Prof Darren Cosker, Dr Neill Campbell
Industrial Supervisor:
Eleanor Whitley

Deformable objects for virtual environments

Industrial Partner:

Marshmallow Laser Feast

Research Project:

Deformable objects for virtual environments

There are currently no solutions on the market that can rapidly generate a virtual reality 'prop' from a generic object and then render it into an interactive virtual environment outside of a studio. A portable solution such as this would enable the creation of deployable immersive experiences in which users could interact with virtual representations of physical objects in real time, opening up new possibilities for applications of virtual reality technologies in entertainment, but also in the sports, health and engineering sectors.

This project combines novel algorithmic software for tracking deformable objects, interactive stereoscopic graphics for virtual reality, and an innovative configuration of existing hardware to create the Marshmallow Laser Feast (MLF) DOVE system. The project objective is to create turn-key tools for repeatably developing unique immersive experiences and training environments. The DOVE system will enable MLF to create mixed reality experiences such as live productions, serialised apps and VR products/experiences to underpin significant business growth and new job creation opportunities.

Background: Maths

BSc Mathematics, Edinburgh University.  

During my degree, I wrote a dissertation on Cosmological Models and studied a variety of courses including modelling, geometry and differential equations.


2016
Kyle Reed
Supervisor:
Prof Darren Cosker; Dr. Kwang In Kim
Industrial Supervisor:
Dr Steve Caulkin; Dr. Jessie Thompson

Improving Facial Performance Animation using Non-Linear Motion

Industrial Partner:

Cubic Motion

Research Project:

Cubic Motion is a facial tracking and animation studio, most famous for its real-time live performance capture. The aim of this research is to improve the quality of facial motion capture and animation through the development of new methods for capture and animation. These methods exploit 4D facial scans, captured with appropriate scanning techniques, which encode non-linear facial motion.

MSc Digital Entertainment - masters project:

Using convolutional neural networks (CNNs) to predict occluded facial expressions when wearing head-mounted displays (HMDs) for VR. The project involves learning a non-linear motion manifold from facial performances, to introduce into a facial tracking and animation pipeline for better-quality results. Other non-linear methods include the use of physical anatomical face models.

Other projects I've been involved in focus on facial expression including learning personalised smiles from identity and user-authoring of expressions using genetic algorithms. 

Background: Computer Science

BSc (Hons) Computer Science with Industrial Placement Year, University of Bath. Technology industrial placement at Nomura International, London: lead technician for the Global Corporate Technical Services web portal implementation; developer for Java EE web applications, including an online database management service with front-end development (JavaScript, JSP, HTML/CSS); liaison for the Corporate Standards and Improvement division.


2016
Lewis Ball
Supervisor:
Prof Lihua You, Prof Jian Jun Zhang
Industrial Supervisor:
Dr Mark Leadbeater, Dr Chris Jenner

Material based vehicle deformation and fracturing

I studied Physics (BSc) and Scientific Computing (MSc) at the University of Warwick. 

I am mainly interested in real-time graphics and physics simulation with applications for interactive media. I am currently with Reflections, a Ubisoft studio in Newcastle.


2016
John Raymond Hill
Supervisor:
Prof Wen Tang

Holovis Flight Deck Officer VR Simulation System

I've always been excited by technologies which let us exceed our biological limitations and Virtual Reality offers endless possibility to achieve this. My research interests are in bringing down the barriers for communication between our senses and virtual environments to increase what we're able to experience and accomplish in them.

I am now a second-year student at Bournemouth University after coming to this course with a BSc in Computer Science and a few years out of academia. Please feel free to get in touch.


2016
Azeem Khan
Supervisor:
Dr Tom Fincham Haines, Dr James Laird
Industrial Supervisor:
Jose Paredes, Dr Dario Sancho

Procedural gameplay flow using constraints

Industrial Partner:

Ubisoft Reflections

Research Project:

Procedural gameplay flow using constraints

This project involves using machine learning to identify what players find exciting or entertaining as they progress through a level.  This will be used to procedurally generate an unlimited number of levels, tailored to a user's playing style.

Tom Clancy's The Division is one of the most successful game launches in history, and the Reflections studio was a key collaborator on the project. Reflections also delivered the Underground DLC, within a very tight development window. The key to this success was the creation of a procedural level design tool, which took a high level script that outlined key aspects of a mission template, and generated multiple different underground dungeons that satisfied this gameplay template. The key difference to typical procedural environment generation technologies, is that the play environment is created to satisfy the needs of gameplay, rather than trying to fit gameplay into a procedurally generated world.

The system used for TCTD had many constraints, and our goal is to develop technology that will build on this concept to generate an unlimited number of missions and levels procedurally, in an engine-agnostic manner that can be used for any number of games. We would like to investigate using Markov constraints, inspired by the 'flow machines' research currently being undertaken by Sony to generate music, text and more automatically in a style dictated by the training material: http://www.flow-machines.com/ (other techniques may be considered).
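
A loose, hypothetical illustration of the concept (not the Reflections tool, and simpler than true Markov constraints, which propagate constraints exactly rather than rejecting samples): a Markov chain of room-to-room transitions is learned from example missions, and a hard constraint on the sampled sequence, here that a six-room mission must end at the extraction point, is enforced on top of it.

import random

transitions = {                         # made-up transition table "learned" from example missions
    "entrance": ["corridor", "vault"],
    "corridor": ["combat", "vault", "corridor"],
    "combat": ["corridor", "extraction"],
    "vault": ["combat"],
    "extraction": [],
}

def sample_mission(length=6):
    rooms = ["entrance"]
    while len(rooms) < length and transitions[rooms[-1]]:
        rooms.append(random.choice(transitions[rooms[-1]]))
    return rooms

def constrained_mission(length=6, tries=1000):
    # Constraint: exactly `length` rooms, finishing at the extraction point.
    for _ in range(tries):
        rooms = sample_mission(length)
        if len(rooms) == length and rooms[-1] == "extraction":
            return rooms
    return None

print(constrained_mission())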

Masters Project:

Background: Physics

MSci Physics with Theoretical Physics, Imperial College


2015
Tom Matko
Supervisor:
Prof Jian Chang
Industrial Supervisor:
John Leonard, Wessex Water

Flow Visualisation of Computational Fluid Dynamics Modelling

Aeration systems have a major influence on the oxygen transfer efficiency and hydrodynamics which affect biological activated sludge treatment. The hydrodynamics in an aeration bioreactor are complex due to the presence of multiphase gas–liquid–solid flows. It is important to understand the flow patterns and bubble plume distributions for the effective design of aeration bioreactor oxidation ditches (ODs). For efficient OD design, grid-based computational fluid dynamics (CFD) models are a powerful tool. Emerging particle- and hybrid-based numerical fluid methods for computer animation enable effective 3D flow visualisation of the hydrodynamics. Wessex Water (the industrial partner of the project) recognises that CFD models are a useful numerical tool, and that the visualisation of flow patterns, bubble plumes and solid biomass in aeration bioreactors can be effective for demonstrating design improvements.


2015
Lazaros Michailidis
Supervisor:
Dr Emili Balaguer-Ballester

Neurogaming

Immersion is the psycho-cognitive state that mediates an individual's interaction with an activity. My research is dedicated to uncovering the brain correlates of immersion in video games using electroencephalography. It has been said that a game's capacity to instil immersion is a significant indicator of its success. However, we have yet to determine what exactly happens while a player is immersed, whether this state truly contributes to increased performance, and what developers can do to maintain it.

For this purpose, I have developed a custom virtual reality game of the Tower Defence genre for PlayStation VR, and we will also employ Machine Learning to detect the maintenance and loss of this state.

 

The project is in collaboration with Sony Interactive Entertainment Europe.


2015
Thomas Joseph Matthews
Supervisor:
Dr Feng Tian / Prof Wen Tang
Industrial Supervisor:
Tom Dolby

Virtual Reality (VR) is a growing and powerful medium that is finding traction in a variety of domains. My research tackles the specific aim of encouraging immersive learning and knowledge retention through short-form Educational VR experiences.

As VR is still an early form of technology, many design principles have yet to be formed and better understood, and there is a lack of strong academic research underpinning the design, development and evaluation of VR products, particularly those with embedded learning.

I aim to develop a framework to support Educational VR production, and critically examine a number of short-form Educational VR prototypes built with this framework.

This project is supported by and embedded within AiSolve Ltd. I am currently based in AiSolve's Luton office.



http://www.aisolve.com/


2015
Ifigeneia Mavridou
Supervisor:
Dr Emili Balaguer-Ballester, Dr Ellen Seis, Dr Alain Renaud
Industrial Supervisor:
Dr Charles Nduka

Emotion and engagement analysis of virtual reality experiences

I am interested in emotion stimulation and in identifying methods for emotion recognition in Virtual Reality (VR) environments. Currently at Emteq, I am working towards enhancing human-computer interaction using emotional states as an input modality, by assisting the development of a facial sensing platform that measures emotions through facial gestures and biometric responses. Emotion stimulation is related to engagement and “Presence” in games and VR; these factors can assist in the creation of immersive experiences as well as the efficient content design of a VR product in terms of replayability. The acquisition and analysis of physiological signals and facial expressions play an important role in my studies towards evaluating and measuring the dimensions of affect and their relation to cognitive processes such as attention and memory. For my studies I will run a sequence of user-behaviour experiments in VR conditions in order to explore emotion stimulation, identification and recognition in VR.


2015
Simone Barbieri
Supervisor:
Xiaosong Yang, Zhidong Xiao
Industrial Supervisor:
Ben Cawthorne

During the early stages of design, artists make sketches using paper and pencil. Sketching is a natural and flexible interface for representing conceptual designs. There are several advantages to using pencil and paper:

  • the author does not need to acquire any special knowledge;
  • it is easy for the author to change the result;
  • precision is not required to express an idea.

Thus, to be convenient, a system that involves a sketching interface must offer the same advantages, or at least benefits that are greater than or comparable to them.

However, posing and modelling 3D characters from 2D input is a complex and open problem.

My idea is to create a system that allows the user to pose the character and, where needed, remodel each section by exploiting the character's outline.

The proposed techniques will allow the user to draw a few simple sketches which will not only pose the character but also guide the detailed deformation of the shape, allowing the user to draw just a partial outline of the character's components and leave the others untouched.



2015
Naval Bhandari
Supervisor:
Prof Eamonn O'Neill
Industrial Supervisor:
Simon Luck

Enhancing user interaction with data using AR/MR

Industrial Partner: BMT Defence Services

Research Project:

An exploration into the enhancement of dimensionality, interactivity, and immersivity within augmented and virtual reality

This research explores whether presenting status and instructional information in augmented or mixed reality, based on geographic, positional or other data, is beneficial to end users and the organisation. The information may be based upon realistic scenarios such as initial operating procedures from documentation used by the Ministry of Defence and managed by BMT. The user is required to understand information through this mechanism, interact with it (e.g. gesturally), and manipulate it to progress through tasks and activities. This research will explore innovations in how to consume the base data, how best to store and then represent it to the user to enable hands-free interaction, how to track actions completed centrally, and which devices to use to make this effective for end users. This will include rapid prototyping and evaluation of systems.


2015
Javier Dehesa
Supervisor:
Julian Padget
Industrial Supervisor:
Andrew Vidler

Applications of deep learning to interactive entertainment

Ninja Theory

I am interested in the design and construction of intelligent systems.

I believe there is a great opportunity for machine intelligence in digital entertainment, and new technology and media such as virtual reality only open even more possibilities. My current research explores applications of machine learning to interactive digital experiences. In particular, I am investigating better ways to understand the intention of a user in an immersive virtual reality environment and elaborate believable responses. Due both to current technical limitations and the inherent nature of the medium, this is a fundamentally hard problem, but one that needs to be tackled in order to build more realistic and meaningful experiences.


2015
Joanna Tarko
Supervisor:
Dr Christian Richardt
Industrial Supervisor:
Tim Jarvis

Graphics Insertions into Real Video for Market Research

CheckmateVR

My research interests are in applications of computer vision and machine learning techniques for visual effects, especially camera tracking and rotoscoping. I am currently working on camera pose estimation in six degrees of freedom with the use of different external sensors.


2014
Garoe Dorta Perez
Supervisor:
Dr Neill Campbell, Dr Yongliang Yang
Industrial Supervisor:
Sara Vicente, Ivor Simpson

Image-guided inverse rendering algorithms for efficient rendering and illumination

Anthropics Technology Ltd

My main research interests lie in the areas of machine learning and computer vision. My current project at Anthropics Technology Ltd involves face modelling applications using deep neural networks (DNNs). This ties in with the software produced at the company, which is centred around human beauty with a special focus on facial analytics.


2014
Mark Moseley
Supervisor:
Dr Leigh McLoughlin
Industrial Supervisor:
Sarah Gilling

My research is based within the area of Assistive Technology:

Young people who have complex physical disabilities and good cognition may face many barriers to learning, communication, personal development, physical interaction and play experiences. Physical interaction and play are known to be important components of child development, but this group currently has few suitable ways in which to participate in these activities.

Technology can help to facilitate such experiences. My research aims to develop a technology-based tool to provide this group with the potential for physical interaction and physical play, by providing a means of manipulating objects.  The tool will be used to develop the target group's knowledge of spatial concepts and the properties of objects. It will utilise eye gaze technology, robotics and haptic feedback (artificial sensation) in order to simulate physical control and sensations.

My research involves Victoria Education Centre in Poole, Dorset.


2014
Asha Ward
Supervisor:
Dr Tom Davis
Industrial Supervisor:
Luke Woodbury

Music Technology for users with Complex Needs

Music is essential to most of us: it can light up all areas of the brain, help develop communication skills, help to establish identity, and allow a unique path for expression. People with complex needs can face barriers to participating in music-making and sound-exploration activities when using instruments and technology aimed at typically able users. My research explores the creation of novel and bespoke hardware and software to make music creation accessible to those with cognitive, physical or sensory impairments and/or disabilities.

Using tools like Arduino and sensor-based hardware, alongside software such as Max/MSP and Ableton Live, the aim is to provide innovative systems that allow for the creation of personal instruments tailored to individual needs and capabilities. These instruments can then be used to interact with sound in new ways not available with traditional acoustic instruments: technology can turn tiny movements into huge sounds, and tangible user interfaces can be used to investigate the relationship between the physical and digital worlds, leading to new modes of interaction. Working with my industrial sponsor, Three Ways School in Bath, and industrial mentor Luke Woodbury of Dotlib, my research will use an Action Research methodology to create bespoke, tangible tools that combine hardware and software, allowing central users, and those facilitating, to create and explore sound in a participatory way.


2014
Ieva Kazlauskaite
Supervisor:
Dr Neill Campbell, Prof Darren Cosker
Industrial Supervisor:
Tom Waterson

Machine Learning for character animation and motion style synthesis

EA games

My interests are in machine learning, optimisation, data reduction, character animation, interactive computer graphics and other related areas. 


2013
Anamaria Ciucanu
Supervisor:
Prof Darren Cosker, Dr Neill Campbell
Industrial Supervisor:
Iain Gilfeather

Reconstructing / Enhancing 3D Animation of Stop Motion Character

Research Project:

E-StopMotion: Reconstructing and Enhancing 3D Animation of Stop Motion Characters by Reverse Engineering Plasticine Deformation

Formerly working with Fat Pebble

Stop Motion Animation is the traditional craft of giving life to handmade models. The unique look and feel of this art form is hard to reproduce with 3D computer-generated techniques, due to the unexpected details that appear from frame to frame and the sometimes choppy appearance of the character movement. The artist's task can be overwhelming, as they have to reshape a character into hundreds of poses to obtain just a few seconds of animation. The results of the animation are usually applied in 2D media such as films or platform games, so character features that took a lot of effort to create remain unseen.

We propose a novel system that allows the creation of 3D stop-motion-like animations from 3D character shapes reconstructed from multi-view images. Given two or more reconstructed shapes from key frames, our method uses a combination of virtual clay deformation, non-rigid registration and as-rigid-as-possible interpolation to generate plausible in-between shapes. This significantly reduces the artist's workload, since far fewer poses are required. The reconstructed and interpolated shapes, with complete 3D geometry, can be manipulated even further through deformation techniques. The resulting shapes can then be used as animated characters in games or fused with 2D animation frames for enhanced stop motion films.
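
A heavily simplified 2D toy of the in-betweening idea (the actual system works on full 3D meshes with non-rigid registration and a least-squares solve): rather than linearly blending vertex positions, which makes shapes shrink through the blend, each edge's rotation and length are interpolated and the shape is rebuilt from them.

import numpy as np

def interpolate_chain(src, dst, t):
    """As-rigid-as-possible-style interpolation of an open 2D point chain."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    e_src, e_dst = np.diff(src, axis=0), np.diff(dst, axis=0)
    angle = (1 - t) * np.arctan2(e_src[:, 1], e_src[:, 0]) \
                + t * np.arctan2(e_dst[:, 1], e_dst[:, 0])
    length = (1 - t) * np.linalg.norm(e_src, axis=1) \
                 + t * np.linalg.norm(e_dst, axis=1)
    edges = np.stack([length * np.cos(angle), length * np.sin(angle)], axis=1)
    start = (1 - t) * src[0] + t * dst[0]
    return np.vstack([start, start + np.cumsum(edges, axis=0)])

straight_arm = [[0, 0], [1, 0], [2, 0]]
bent_arm = [[0, 0], [1, 0], [1, 1]]
print(interpolate_chain(straight_arm, bent_arm, 0.5))   # a plausible in-between pose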

https://vimeo.com/289971097

Video accompanying the publication:
Anamaria Ciucanu, Naval Bhandari, Xiaokun Wu, Shridhar Ravikumar, Yong-Liang Yang, Darren Cosker. 2018. E-StopMotion: Digitizing Stop Motion for Enhanced Animation and Games. In MIG 18: Motion, Interaction and Games (MIG 18), November 8-10, 2018, Limassol, Cyprus. ACM, New York, USA, 11 pages.


2013
Rahul Dey
Supervisor:
Dr Christos Gatzidis
Industrial Supervisor:
Jason Doig

New Games Technologies

My research focuses on using real-time voxelization algorithms and procedurally creating content in voxel spaces. Creating content using voxels is more intuitive than polygon modelling and has a number of other advantages. This research intends to provide novel methods for real-time voxelization and for subsequently editing voxel content using procedural generation techniques. These methods will also be adapted for next-generation consoles to take advantage of the features that they expose.
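
A naive CPU sketch of surface voxelization (real-time approaches do this on the GPU with conservative rasterization, and the sampling scheme here is only an assumption for illustration): points are sampled on each triangle and the voxels they land in are marked occupied, producing a grid that procedural editing passes can then operate on.

import numpy as np

def voxelize(triangles, res=32, samples=200):
    grid = np.zeros((res, res, res), dtype=bool)
    rng = np.random.default_rng(1)
    for a, b, c in triangles:                 # triangle vertices inside the unit cube
        u, v = rng.random(samples), rng.random(samples)
        flip = u + v > 1.0                    # fold samples back into the triangle
        u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
        points = np.outer(1 - u - v, a) + np.outer(u, b) + np.outer(v, c)
        idx = np.clip((points * res).astype(int), 0, res - 1)
        grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

triangle = (np.array([0.1, 0.1, 0.5]), np.array([0.9, 0.1, 0.5]), np.array([0.5, 0.8, 0.5]))
print(int(voxelize([triangle]).sum()), "occupied voxels")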


2013
Zack Lyons
Supervisor:
Dr Leon Watts
Industrial Supervisor:
Prof Nigel Harris

Virtual Therapy

Industrial Partner:

Designability / Brain Injury Rehabilitation Trust

Research Project:

Virtual Therapy - A Story-Driven and Interactive Virtual Environment for Acquired Brain Injury Rehabilitation

My research involves using interactive computational simulations to deliver meaningful benefits to people with acquired brain injuries. It will contribute to the science base on human-agent interaction, as well as to research on Human-Computer Interaction in mental health. I am currently carrying out exploratory work with the intention of articulating design goals to inform future development of simulations. The envisioned emphasis of the project is in exploring the unique dynamics of the three-way interaction between clients, clinicians and the machine.


2013
Tom Smith
Supervisor:
Dr Julian Padget
Industrial Supervisor:
Andrew Vidler

Procedural content generation for computer games

Ninja Theory

Procedural content generation (PCG) is increasingly used in games to produce varied and interesting content. However, PCG systems are becoming increasingly complex and tailored to specific game environments, making them difficult to reuse, so we investigate ways to make PCG code reusable and to allow simpler, usable descriptions of the desired output. By allowing the behaviour of the generator to be specified without altering the code, we provide increasingly data-driven, modular generation. We look at reusing tools and techniques originally developed for the semantic web, and investigate the possibility of using them with industry-standard games development tools.


2013
Elena Marimon Munoz
Supervisor:
Dr Hammadi Nait-Charif
Industrial Supervisor:
Phil Marsden

Digital Radiography: Image acquisition and Image enhancement

My project is sponsored by PerkinElmer, a multinational technology corporation focused on human and environmental health, and by the Centre for Digital Entertainment. The project focuses on characterising some of the components that affect the image acquisition of a Dexela CMOS X-ray detector, and on the development of scatter-removal software for image post-processing in mammography applications.


2013
Fabio Turchet
Supervisor:
Prof Alexander Pasko
Industrial Supervisor:
Dr Sara C. Schvartzman

New VFX Technologies

My research project focuses on the simulation of musculoskeletal systems for the visual effects industry. Movies often feature creatures and digital doubles that have to look real, and part of this realism comes from an anatomically correct deformation of soft tissues and skin.

Challenges in the area arise from the complexity of the many interacting muscles in the body, which have to be simulated numerically and efficiently with methods that take into account collisions, material anisotropy, non-linearity and artistic control.


2013
Stephane Le Boeuf
Supervisor:
Dr Ian Stephenson
Industrial Supervisor:
Dr Sara C. Schvartzman

New VFX Technologies

Since the beginning of mankind, people have tried to reproduce their universe. Driven by films that need a realistic universe, computer scientists have developed physically based photorealistic rendering and plausible photorealistic rendering. I am working on this problem, and two approaches look promising: finding a new, faster way to render physically based scenes, or finding a way to digitise the real world. GPUs are becoming faster and faster, so I am working on how to use them correctly and efficiently to produce relevant solutions for the VFX industry.


2013
Tom Wrigglesworth
Supervisor:
Dr Leon Watts, Dr Simon Jones
Industrial Supervisor:
Lucy May Maxwell

Towards a Design Framework for Museum Visitor Engagement with Historical Crowdsourcing Systems

Imperial War Museum

I am researching how novice users engage with online museum collections through crowd-sourcing initiatives. My project is in collaboration with the Imperial War Museums and is primarily focused on the American Air Museum website - a large online archive of media and information that accommodates crowd-sourced contributions. My research interests are in Human-Computer Interaction, Research Through Design methodologies and encounters with cultural heritage through web-browser based technologies.


2013
Richard Jones
Supervisor:
Dr Richard Southern
Industrial Supervisor:
James Bird & Ian Masters

New VFX Technologies

Richard is working alongside VFX studio Double Negative to develop improvements to the liquid simulation toolset for creating turbulent liquid and whitewater effects for feature-film visual effects. The current toolset for liquid simulation is built around the creation of simple single-phase liquid motion, such as ocean waves and simple splashes, but struggles to capture the often more exciting mixed air-liquid phenomena of very turbulent splashes and sprays. The creation of turbulent effects therefore relies heavily on artistic input and on the experience and intuition needed to use existing tools in unorthodox ways. By incorporating more physical models of turbulent fluid phenomena into the existing liquid simulation toolset, his project aims to develop techniques that better capture realistic turbulent fluid effects and allow faster turnover of the highly detailed liquid effects required for feature film.


2013
Adam Boulton
Supervisor:
Dr Rachid Hourizi, Prof Eamonn O'Neill
Industrial Supervisor:
Alice Guy

The Interruption and Abandonment of Video Games

Industrial Partner: PaperSeven

The cost of video game development is rapidly increasing as the technological demands of producing high quality games grow ever larger. With budgets set to spiral into the hundreds of millions of dollars, and audience sizes rapidly expanding as gaming reaches new platforms, we investigate the phenomenon of task abandonment in games. Even the most critically acclaimed titles are rarely completed by even half their audience. With the cost of development so high, it is more important than ever that developers, as well as the players, get value for money. We ask why so few people are finishing their games, and investigate whether anything can be done to improve these numbers.



