
Distributed AI Support Engineer

GRNET S.A.

Athens, Attica, Greece (Hybrid, Permanent)

Posted: January 13, 2026


Quick Summary

We are seeking a Distributed AI Support Engineer to join our team in Athens, Greece, where we will host the DAEDALUS supercomputer and Pharos, the Greek AI factory, built to serve demanding AI workloads.

Job Description

Why Join Us

GRNET S.A. provides Internet connectivity, high-quality e-Infrastructures and advanced services to the Greek Educational, Academic and Research community, aiming to minimize the digital divide and ensure equal participation of its members in the global knowledge society. GRNET provides advanced services to the following sectors: Education, Research, Health, Culture.

In 2026, GRNET is expected to host the DAEDALUS supercomputer, which is expected to rank among Europe's top supercomputers and will also serve Pharos, the Greek AI factory, with its specialised AI workflows. DAEDALUS is based on HPE's NVIDIA GH200 direct liquid-cooled architecture, designed to deliver about 89 petaflops sustained (115 petaflops peak) for traditional HPC, AI and Big Data/HPDA workloads across CPU- and GPU-accelerated partitions, backed by 1 PB of high-performance NVMe storage and 10 PB of usable capacity.

As a Distributed AI Support Engineer, you will help researchers, startups, and industry teams turn this cutting-edge infrastructure into real-world AI breakthroughs, working alongside leading European universities, supercomputing centres, and industrial partners in the broader EuroHPC ecosystem. More specifically, you will contribute to the following focus areas. You are not expected to know all the technologies listed below. We are looking for strong AI and Python programming skills, solid fundamentals, and motivation to learn the necessary tools and workflows.

Focus Areas

1. User support and operations

Provide first-line support for AI on HPC workloads (LLM, computer vision and other GPU-accelerated workloads): ticket triage, quick diagnosis of failed runs, escalation when hardware issues are suspected. Support users in writing, reviewing and debugging Slurm job scripts that launch multi-GPU/multi-node jobs via torchrun, accelerate launch or deepspeed, and support Ray/DeepSpeed and vLLM inference workflows where appropriate.
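To illustrate the kind of job script this role supports, here is a minimal multi-node torchrun sketch; the partition name, GPU counts, port and script name are placeholders, not GRNET-specific values.

```
#!/bin/bash
#SBATCH --job-name=llm-finetune
#SBATCH --partition=gpu            # placeholder partition name
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=1        # one torchrun launcher per node
#SBATCH --gpus-per-node=4
#SBATCH --time=04:00:00

# Use the first allocated node as the rendezvous host
MASTER_ADDR=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)

srun torchrun \
    --nnodes="$SLURM_NNODES" \
    --nproc-per-node=4 \
    --rdzv-backend=c10d \
    --rdzv-endpoint="$MASTER_ADDR:29500" \
    train.py
```

A frequent support task is checking that `--nnodes`/`--nproc-per-node` actually match the Slurm allocation; mismatches are a common cause of hung multi-node launches.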

2. AI/LLM software stacks and containers

Maintain and test shared AI/LLM and computer-vision stacks for HPC and Cloud (PyTorch, DDP/FSDP, Hugging Face Transformers & Accelerate, PEFT/LoRA, Unsloth, DeepSpeed, Bitsandbytes, TensorFlow, RAPIDS, Ray, vLLM and related tooling), ensuring compatibility with NVIDIA drivers, CUDA and NCCL. Design, publish and support recommended Apptainer/Singularity containers (including NGC-based images) for training, fine-tuning, inference and RAG.
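A recommended container might start from an NGC image; the definition below is an illustrative Apptainer sketch, not an official GRNET recipe (the image tag and pip packages are assumptions).

```
Bootstrap: docker
From: nvcr.io/nvidia/pytorch:24.07-py3

%post
    # add fine-tuning and inference tooling on top of the NGC PyTorch stack
    pip install --no-cache-dir transformers accelerate peft bitsandbytes

%runscript
    exec python "$@"
```

Building from NGC images keeps the CUDA/NCCL versions inside the container aligned with NVIDIA's tested driver matrix, which simplifies compatibility checks on the host.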

3. Debugging, diagnostics and performance

Diagnose common AI/LLM failures (CUDA errors, NCCL timeouts, GPU OOM, distributed hangs, misconfigured environment). Validate driver/CUDA/NCCL stacks and profile/tune workloads using PyTorch Profiler, NVIDIA Nsight (Systems/Compute), TensorBoard, MLflow and Weights & Biases (WandB).
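When triaging NCCL timeouts or silent distributed hangs, a typical first step is to re-run with verbose diagnostics enabled. The variable names below are standard NCCL/PyTorch/CUDA knobs; the values are illustrative choices for a debugging session, not production defaults.

```shell
# Hypothetical debugging environment for a failing multi-node run.
export NCCL_DEBUG=INFO             # print NCCL topology and error details
export NCCL_DEBUG_SUBSYS=INIT,NET  # limit logging to init and transport
export TORCH_NCCL_BLOCKING_WAIT=1  # fail fast instead of hanging on collectives
export CUDA_LAUNCH_BLOCKING=1      # surface CUDA errors at the faulting call
```

With `CUDA_LAUNCH_BLOCKING=1` the stack trace points at the kernel that actually failed, at the cost of serialised (slower) execution, so it is usually enabled only while debugging.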

4. Distributed training, quantisation and inference

Guide users on scalable distributed training with PyTorch DDP/FSDP and DeepSpeed (ZeRO/pipeline/tensor parallelism), plus Ray and higher-level frameworks (PyTorch Lightning, Hydra), mapped to node/GPU topology. Support 8-bit/4-bit quantisation and QLoRA workflows (Unsloth, Bitsandbytes) and large-scale inference frameworks (vLLM, NVIDIA TensorRT-LLM, Triton Inference Server); contribute to AI/LLM and computer-vision benchmarking.
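On the inference side, a single-node vLLM deployment can be as simple as the following sketch; the model name, GPU count and port are placeholders, and the command requires vLLM and GPUs to actually run.

```
# Serve an open-weights model with tensor parallelism across 4 GPUs.
# Shown for illustration only; model and port are assumptions.
vllm serve meta-llama/Llama-3.1-8B-Instruct \
    --tensor-parallel-size 4 \
    --port 8000
```

The `--tensor-parallel-size` value normally matches the number of GPUs on the node, mirroring the node/GPU topology mapping mentioned above.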

5. Data, storage and I/O

Advise on effective storage use for tokenised datasets, vector indices, checkpoints and logs (layout, sharding, cleanup). Troubleshoot dataloader/I/O bottlenecks and recommend suitable formats and caching/staging, including use of NVIDIA DALI, WebDataset, RAPIDS and Dask where appropriate.
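A common pattern in this area is staging shard files from shared storage to fast node-local scratch before training. A minimal sketch, assuming a WebDataset-style layout; both paths are hypothetical:

```shell
# Copy dataset shards from shared storage to node-local scratch,
# then point the dataloader at the local copy. Paths are illustrative.
SHARED="${SHARED:-/mnt/shared/datasets/my_corpus}"  # hypothetical shared FS path
SCRATCH="${TMPDIR:-/tmp}/my_corpus"                 # node-local scratch target
mkdir -p "$SCRATCH"
# Copy only the .tar shards; tolerate an empty source in this sketch.
cp "$SHARED"/shard-*.tar "$SCRATCH"/ 2>/dev/null || true
```

Reading shards from local scratch avoids hammering the shared filesystem with many small random reads, which is a frequent cause of dataloader bottlenecks.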

6. Monitoring, evaluation and governance

Monitor AI/LLM usage metrics (GPU hours, job success rates, queue waiting times, typical model sizes/frameworks) to drive improvements in stacks, docs and training. Support Access Call evaluation via technical review of AI/LLM proposals and resource feasibility checks.

7. Documentation, training and community building

Develop and maintain task-oriented documentation and cookbooks for AI/LLM workflows on HPC and Cloud. Prepare hands-on tutorials/demos (PyTorch, TensorFlow, Hugging Face Transformers, vLLM, Ray/DeepSpeed, RAPIDS, JupyterLab/TensorBoard/MLflow).

8. Reporting, deliverables and outreach

Prepare technical reports on trainings offered; maintain dashboards/databases for trainings, KPIs and survey data. Prepare web content (news, training/service pages), coordinate announcements (newsletters, social media), and support stakeholders and user access processes.

Key Technologies and Tools

• Frameworks and libraries: PyTorch, DDP, FSDP, Hugging Face Transformers, Accelerate, PEFT/LoRA, Unsloth, DeepSpeed (ZeRO, pipeline, tensor parallelism), Bitsandbytes, QLoRA, torchvision and other common computer-vision libraries; TensorFlow; vLLM; Ray Train; Hugging Face Datasets, SentencePiece, FAISS (faiss-gpu), Gradio, and supporting Python libraries such as SciPy, Matplotlib and Optimum.

• Launchers and schedulers: torchrun, accelerate launch, deepspeed, Slurm or similar HPC schedulers, including typical srun / salloc multi-node launch patterns and Ray-based multi-node launchers.

• Profiling and debugging: PyTorch Profiler, NVIDIA Nsight Systems/Compute, CUDA tools, NCCL debugging, TensorBoard, MLflow, Weights & Biases (WandB), and HPC debuggers and profilers.

• Containers: Apptainer/Singularity for image creation and migration, and Apptainer-based container workflows.


Requirements:
Required Qualifications

• Degree in Computer Science, Engineering or a related STEM field. Applications from graduating students and recent graduates will be considered.

• Strong programming skills in Python and experience with AI frameworks and libraries (e.g. PyTorch, TensorFlow, Hugging Face Transformers, vLLM, Ray, etc.).

• Hands-on experience training or fine-tuning models on GPUs using PyTorch and related tooling (e.g. torchrun, DDP).

• Ability to communicate technical concepts clearly to researchers and industry users, both in writing (documentation) and in person (training, support).

Desirable Qualifications

• Familiarity with GPU architectures and concepts relevant to AI on HPC.

• Experience with LLM or foundation model training/fine-tuning, distributed training frameworks (FSDP, DeepSpeed) and quantisation methods (8-bit/4-bit, QLoRA, PEFT/LoRA, Bitsandbytes, Unsloth).

• Experience with profiling and monitoring tools (PyTorch Profiler, NVIDIA Nsight Systems/Compute, cluster monitoring stacks).

• Experience building or maintaining containerised environments for GPU workloads (Apptainer/Singularity) in an HPC context.

• Prior involvement in user support for HPC or research computing centres, including documentation, training and best-practice guides.


Benefits:
GRNET provides a creative, dynamic and challenging working environment that encourages team spirit, cooperation and continuous learning of state-of-the-art technology.

• Opportunities for international collaboration

• Competitive remuneration package

• Opportunities for professional development

• Modern, friendly and innovative working environment

GRNET is an equal opportunity employer that is committed to diversity and inclusion in the workplace. People with a diverse range of backgrounds are encouraged to apply. We do not discriminate against any person based upon their race, age, color, gender identity and expression, disability, national origin, medical conditions, religion, parental status, or any other characteristics protected by law.

All applications will be treated with strict confidentiality.
