Trinity College Dublin, Ireland: 2021 Postdoctoral Position (Multimodal Interaction)
Research Fellow in Multimodal Interaction
Trinity College Dublin
Description
Post Summary
The Science Foundation Ireland ADAPT Research Centre (adaptcentre.ie) seeks to appoint a Research Fellow in Multimodal Interaction. The successful candidate will support research on online interaction in teaching scenarios, in the context of the recently funded SFI COVID-19 Rapid Response Project, RoomReader, led by Prof. Naomi Harte in TCD and Prof. Ben Cowan in UCD. The candidate will work with a team to drive research into multimodal cues of engagement in online teaching scenarios. The work involves a collaboration with Microsoft Research Cambridge and Microsoft Ireland.
The candidate should have extensive experience in speech-based interaction and in modelling approaches using deep learning with multimodal signals (e.g. linguistic, audio, and visual cues). The candidate will also be responsible for supporting research in a number of areas, including:
· Identifying and understanding multimodal cues of engagement in speech-based interaction
· Deep learning architectures for multimodal modelling of engagement in speech interactions
· Application and evaluation of modelling approaches to the specific case of online teaching scenarios
The ideal candidate will therefore have specific expertise in speech interaction, signal processing and deep learning. Reporting to a Principal Investigator, the successful candidate will work within a larger group of Postdoctoral Researchers, PhD students and Software Developers. They will have exposure to all aspects of the project lifecycle, from requirements analysis to design, code, test and face-to-face demonstrations, including with our industry partners Microsoft Research and Microsoft Ireland.
The successful candidate will work alongside the best and brightest talent in speech and language technologies and video processing in the Sigmedia Research Group on a day-to-day basis. The wider ADAPT Research Centre will give exposure to a broader range of technologies, including data analytics, adaptivity, personalisation, interoperability, translation, localisation and information retrieval. As a university-based research centre, ADAPT also strongly supports continuous professional development and education. In this role you will develop as a researcher, both technically and scientifically. In addition, ADAPT will support candidates in enhancing their confidence, leadership skills and communication abilities.
Standard Duties and Responsibilities of the Post
· Identify and analyse research papers in online human interaction scenarios, specifically those relevant to online teaching
· Identify existing datasets suitable for baseline analysis of multimodal interaction
· Support the design and capture of a new multimodal data corpus (the capture itself is conducted by a Research Assistant on the project)
· Develop and adapt deep learning architectures to multimodal interaction scenarios, subsequently adapting the approaches to the specifics of online teaching interactions
· Liaise with engineering and HCI experts to refine and influence approaches to the project at all levels
· Report regularly to the PI of the project, and interact regularly with other team members to maintain momentum in the project
· Dataset recording and subsequent editing and labelling for project deployment
· Publish and present results from the project in leading journals and conferences
Funding Information
The position is funded through the SFI COVID-19 Research Call 2020.
Person Specification
The successful candidate will have broad experience in deep learning architectures applied to speech-based interaction. The successful candidate is expected to:
· Have a thorough understanding of speech-based interaction, including linguistic, verbal, non-verbal and visual cues
· Be expert in deep-learning applied to speech processing
· Be skilled at taking disparate research ideas and drawing innovative conclusions or seeing new solutions
· Have excellent interpersonal skills
· Be highly organised in their work, with an ability to work remotely if necessary
Qualifications
Candidates appointed to this role must have a PhD in Engineering, Computer Science, or a closely related field.
Knowledge & Experience
Essential
· Understanding of multimodal cues in speech-based interaction
· Experience in developing deep learning architectures for speech processing
· Familiarity with running large-scale experiments, e.g. on a high-performance compute farm
· Publication track record commensurate with career stage in high-quality conferences or journals
Desirable
· Familiarity with the MS Teams environment
· Experience with post-production tools for video editing
· Experience mentoring junior team members
· Track record of publishing open-source code
Skills & Competencies
· Excellent written and oral proficiency in English (essential)
· Good communication and interpersonal skills, both written and verbal
· Proven ability to prioritise workload and work to exacting deadlines
· Flexible and adaptable in responding to stakeholder needs
· Enthusiastic and structured approach to research and development
· Excellent problem-solving abilities
· Desire to learn about new products and technologies, and to keep abreast of new technical and research developments
Sigmedia Research Group
The Signal Processing and Media Applications (aka Sigmedia) Group was founded in 1998 in Trinity College Dublin. Originally focused on video and image processing, the group today spans research across all aspects of media: video, images, speech and audio. Prof. Naomi Harte leads the Sigmedia research endeavours in human speech communication. The group has active research in audio-visual speech recognition, evaluation of speech synthesis, multimodal cues in human conversation, and birdsong analysis. The group is interested in all aspects of human interaction, centred on speech. Much of our work is underpinned by signal processing and machine learning, but we also have researchers grounded in the linguistic and psychological aspects of speech processing.