
Constraint-Aware 3D Scenario Generation for Agentic Digital Twins in Smart Manufacturing


Aston Campus in Birmingham, UK


About the Project

Project Details

This project addresses a key challenge in smart manufacturing: how to generate realistic, editable, and engineering-aware industrial 3D scenarios from natural-language requirements and iterative user feedback. The vision is to support agentic and interactive digital twins that are not only visually plausible, but also operationally meaningful for downstream applications such as robotic simulation, training, planning, embodied AI, human–robot collaboration, workflow testing, factory reconfiguration, and virtual commissioning.

The research will tackle four interconnected problems. First, it will investigate how natural-language industrial requirements can be translated into a structured, interpretable, and verifiable scene representation that captures objects, spatial relations, functional zones, workflow logic, and engineering constraints. Second, it will address how iterative user feedback can be supported through localised, controllable scene edits while preserving global spatial consistency, design intent, and constraint satisfaction. Third, it will study how to maintain consistency between 2D layout planning and 3D scene generation, so that spatial configurations, semantic relations, and engineering rules remain aligned across modalities. Fourth, it will evaluate whether generated scenes are realistic, constraint-compliant, and operationally useful for applications including robotic planning and training, embodied AI interaction, simulation-based optimisation, and manufacturing decision support.

To address these challenges, the project will develop four methodological components. The first is a domain-adapted scene planner, distilled from a strong LLM-based foundation model and augmented with industrial knowledge. This planner will map natural-language requirements into a hierarchical industrial scene representation spanning object, zone, and workflow levels, together with an initial 2D layout. A constraint completion module will infer implicit engineering rules, such as robot safety buffers, equipment clearance, minimum aisle widths, and access constraints.
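To give a flavour of what a constraint completion module might do, the sketch below infers implicit clearance rules from object categories in a 2D layout. All names, categories, and clearance values are illustrative assumptions, not part of the project specification:

```python
from dataclasses import dataclass

# Hypothetical scene objects; field names and categories are illustrative only.
@dataclass
class SceneObject:
    name: str
    category: str  # e.g. "robot", "conveyor", "shelf"
    x: float       # 2D layout position (metres)
    y: float
    w: float       # footprint width
    h: float       # footprint depth

# Assumed implicit rules: each category implies a minimum clearance buffer (metres).
IMPLICIT_CLEARANCE = {"robot": 1.0, "conveyor": 0.5, "shelf": 0.3}

def complete_constraints(objects):
    """Infer implicit clearance constraints from object categories."""
    constraints = []
    for obj in objects:
        buffer = IMPLICIT_CLEARANCE.get(obj.category, 0.2)  # default buffer
        constraints.append(
            {"type": "clearance", "object": obj.name, "min_distance": buffer}
        )
    return constraints

layout = [
    SceneObject("arm_1", "robot", 2.0, 3.0, 1.2, 1.2),
    SceneObject("belt_1", "conveyor", 5.0, 3.0, 4.0, 0.8),
]
print(complete_constraints(layout))
```

In practice the project envisages learning such rules with the aid of an LLM-based planner rather than a hand-written lookup table; the table here only illustrates the kind of output a completion step would produce.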

The second component is a two-stage local refinement engine for 2D layout editing. In Stage A, a subgraph-based local proposal mechanism will generate targeted updates in response to user feedback or revised requirements. In Stage B, a constraint repair module will resolve collisions, clearance violations, blocked access paths, and other local inconsistencies, while preserving key global properties of the scene. This will allow iterative and controllable editing without destabilising the overall design.
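A minimal sketch of a Stage-B repair step is shown below: if an edited object's axis-aligned footprint now overlaps a fixed neighbour, it is pushed along one axis until a required clearance is restored. The single-axis heuristic and all names are assumptions for illustration, not the project's actual repair algorithm:

```python
def overlap_x(a, b):
    """Signed x-overlap of two boxes given as (x, y, w, h), (x, y) being the min corner."""
    return min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0])

def repair(moved, fixed, clearance=0.5):
    """If `moved` intersects `fixed`, translate it along x to restore `clearance`."""
    ox = overlap_x(moved, fixed)
    oy = min(moved[1] + moved[3], fixed[1] + fixed[3]) - max(moved[1], fixed[1])
    if ox > 0 and oy > 0:  # footprints actually intersect
        x, y, w, h = moved
        return (fixed[0] + fixed[2] + clearance, y, w, h)  # slide past the fixed object
    return moved  # no violation: leave the edit untouched

machine = (0.0, 0.0, 2.0, 2.0)   # fixed equipment
edited = (1.5, 0.5, 1.0, 1.0)    # user-moved object, now colliding
print(repair(edited, machine))   # → (2.5, 0.5, 1.0, 1.0)
```

A real repair module would choose among many candidate moves and re-check global properties (access paths, workflow adjacency) after each local fix, which is exactly why the project separates local proposals (Stage A) from constraint repair (Stage B).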

The third component is a cross-modal generation framework built around a canonical scene state that links the validated 2D layout to the generated 3D scene. This will include a 2D-to-3D instantiation module, a cross-modal consistency objective, and a round-trip validation strategy to ensure that semantics, geometry, and engineering constraints remain coherent between representations. The aim is to create a trustworthy 2D-to-3D pipeline suitable for industrial deployment rather than purely visual generation.
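One way to picture round-trip validation, under the assumption that the canonical scene state stores per-object 2D footprints and the 3D scene stores (x, y, z, width, depth, height) boxes, is to project each 3D box back to the plane and compare it to the canonical layout within a tolerance. Names and the tolerance value are illustrative:

```python
def project_to_2d(box3d):
    """Drop the vertical dimension: (x, y, z, w, d, h) -> (x, y, w, d)."""
    x, y, z, w, d, h = box3d
    return (x, y, w, d)

def round_trip_ok(layout2d, scene3d, tol=0.05):
    """Check every 3D footprint matches its canonical 2D footprint within `tol` metres."""
    for name, box2d in layout2d.items():
        if name not in scene3d:
            return False  # object lost during 3D instantiation
        projected = project_to_2d(scene3d[name])
        if any(abs(a - b) > tol for a, b in zip(box2d, projected)):
            return False  # geometry drifted between modalities
    return True

layout = {"press_1": (1.0, 2.0, 3.0, 2.0)}
scene = {"press_1": (1.0, 2.0, 0.0, 3.0, 2.0, 2.5)}
print(round_trip_ok(layout, scene))  # → True
```

The project's consistency objective would go well beyond footprint matching (semantic relations and engineering rules must also survive the round trip), but the same validate-after-projection pattern applies.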

The fourth component is a multi-level evaluation framework. This will combine expert visual scoring, geometric comparison against reference layouts, rule-based engineering checks, and downstream case-study testing in robotics simulation. These evaluations will assess scene fidelity, functional feasibility, safety compliance, and usefulness for planning, training, and interaction tasks.
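A toy rule-based check in the spirit described above might verify that gaps between adjacent equipment rows meet a minimum aisle width. The rule value and layout encoding are assumptions for illustration:

```python
MIN_AISLE_WIDTH = 1.2  # metres; an assumed safety rule, not a project-specified value

def check_aisles(rows):
    """`rows` is a list of (y_min, y_max) equipment bands sorted by y_min.
    Returns a violation message for every gap narrower than the rule."""
    violations = []
    for (_, top_a), (bottom_b, _) in zip(rows, rows[1:]):
        gap = bottom_b - top_a  # free space between consecutive bands
        if gap < MIN_AISLE_WIDTH:
            violations.append(f"aisle of {gap:.2f} m < required {MIN_AISLE_WIDTH} m")
    return violations

rows = [(0.0, 2.0), (3.5, 5.0), (5.8, 7.0)]
print(check_aisles(rows))  # second gap is 0.8 m, too narrow
```

In the full evaluation framework, checks of this kind would sit alongside expert scoring, geometric comparison, and simulation-based case studies, so that a scene passing every automated rule can still be judged on visual and operational quality.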

The expected contributions are: a constraint-aware framework for industrial scene generation; a structured and verifiable representation of industrial design intent; a trustworthy and editable 2D-to-3D generation pipeline; and a practical evaluation protocol for robotics, embodied AI, and broader smart manufacturing applications. More broadly, the project will help bridge generative AI, digital twins, and industrial intelligence, enabling reusable virtual environments for training, validation, optimisation, and future autonomous manufacturing systems.

Person Specification

Candidates should have been awarded, or expect to achieve, EITHER:

a) a First or Upper Second Class award in their undergraduate degree, in a relevant subject.

OR

b) a First or Upper Second Class award in their undergraduate degree, and a Merit or Distinction in a Masters degree, both in a relevant subject.

Qualifications from overseas institutions will be considered, but performance must be equivalent to that described above, and the University reserves the right to ascertain this equivalence according to its own criteria.

Desirable / Essential Skills or Experience

  • A background in Computer Science, Artificial Intelligence, Robotics, Machine Learning, Computer Vision, or a closely related discipline.
  • Good Python programming skills.
  • Familiarity with at least some of the following: deep learning, generative models, 3D vision/graphics, scene understanding, robotics, or spatial reasoning.
  • Interest in smart manufacturing or industrial digital twins.

Submitting an application

We can only consider applications that are complete and include all supporting documents. Applications that do not provide all the relevant documents will be automatically rejected. Your application must include:

  1. English language copies of the transcripts and certificates for all your higher education degrees, including any Bachelor degrees.
  2. A Research Statement detailing your understanding of the research area, how you would approach the project, and a brief review of relevant literature. Be sure to use the title of the research project you are applying for. There is no set format or word count.
  3. A personal statement which outlines any further information which you think is relevant to your application, such as your personal suitability for research, career aspirations, possible future research interests, and further description of relevant employment experience.
  4. A Curriculum Vitae (Resume) which details your education and work history.
  5. References from two academic referees who can discuss your suitability for independent research. References must be on headed paper, signed, and dated no more than two years ago. At least one reference should be from your most recent university. You can submit your references at a later date if necessary.
  6. Evidence that you meet the English Language requirements. If you do not currently meet the language requirements, you can submit this at a later stage.
  7. A copy of your passport. Where relevant, include evidence of settled or pre-settled status.

Contact Information

For enquiries about this project, contact Dr Dan Dai at d.dai@aston.ac.uk.

Location

This position will be based on the Aston Campus in Birmingham, UK. The successful candidate will need to be located within a reasonable distance of the campus, and will be expected to visit in person regularly.

Interviews

Interviews will be conducted online via Microsoft Teams. If you are shortlisted, you will be contacted directly with details of the interview.

Funding Notes

This project is open to Home students only. It covers all tuition fees and includes a stipend at current UKRI rates, together with a generous Research Training and Support Grant.

Please note that the successful candidate will be responsible for any expenses related to moving to Birmingham and/or visiting the Aston campus.
