Job Type: Permanent
What will your typical day look like?
You will join the AI Factory – our AI software development team. We coordinate and synthesize the powers of Omnia AI to create unique and targeted products that optimize the way a business works. Combining machine-learning capabilities with deep business and industry acumen allows us to solve complex problems and build tangible, enterprising solutions. The AI Factory is a creation engine that is continually learning, evolving and driving change. It's built into our DNA and it informs our approach to product design.
Your role will be to work with developers, data scientists and other DevOps professionals to design and provision infrastructure and to deploy software and AI/ML models using modern DevOps principles – both in on-premise and cloud computing environments. Successful candidates will have supported production software deployments (on-premise and/or in the cloud) and have a passion for automation and repeatability.
About the team
The Omnia Artificial Intelligence practice brings together specialized experts with hands-on experience and cutting-edge information products that facilitate successful Artificial Intelligence (AI) transformations. We develop AI-enabled solutions to address all aspects of a client's transformative journey with a disciplined focus on business outcomes.
What you will bring
- 2-5+ years in an infrastructure and/or DevOps role
- BS or MS in computer science, or equivalent
- Strong working knowledge of Kubernetes: installation, maintenance, and operations
- Experience with hosted Kubernetes on AWS, Azure or GCP
- Experience installing and maintaining on-premise Kubernetes infrastructure
- Coding experience in one of Python, JavaScript or Java
- Working knowledge of configuration management, continuous integration & delivery, and/or infrastructure-as-code tools
- Strong experience automating manual steps to ensure continuous and repeatable processes
- Proven record of following continuous integration and delivery best practices
- Expertise in Git and Jenkins pipelines
- Working knowledge of automation tools such as Ansible, Chef, Puppet, Terraform, etc.
- Experience setting up monitoring infrastructure for bare-metal deployments as well as Kubernetes (K8s)
- Understanding of common networking principles and technologies (domain name systems, load balancers, reverse proxies, firewalls, etc.)
- Understanding of common cloud computing concepts, including virtual machines, virtual networks, autoscaling, serverless computing, and identity & access management
- Understanding of cloud computing and on-premise security principles and best practices
- Experience with common Linux server distributions (e.g. Red Hat, Ubuntu)