About LeoTech
At LeoTech, we are passionate about building software that solves real-world problems in the Public Safety sector. Our software has been used to help fight continuing criminal enterprises and drug trafficking organizations, identify financial fraud, disrupt sex and human trafficking rings, and support mental health efforts, to name a few.
Role
As an AI/NLP Engineer on our Data Science team, you will be at the forefront of leveraging Large Language Models (LLMs) and cutting-edge AI techniques to create transformative solutions for public safety and intelligence workflows.
You will apply your expertise in LLMs, Retrieval-Augmented Generation (RAG), semantic search, Agentic AI, GraphRAG, and other advanced AI solutions to develop, enhance, and deploy robust features that enable real-time decision-making for our end users.
You will work closely with product, engineering, and data science teams to translate real-world problems into scalable, production-grade solutions.
This is an individual contributor (IC) role that emphasizes technical depth, experimentation, and hands-on engineering.
You will participate in all phases of the AI solution lifecycle, from architecture and design through prototyping, implementation, evaluation, and continuous improvement.
Core Responsibilities
Design, build, and optimize AI-powered solutions using LLMs, RAG pipelines, semantic search, GraphRAG, and Agentic AI architectures.
Implement and experiment with the latest advancements in large-scale language modeling, including prompt engineering, model fine-tuning, evaluation, and monitoring.
Collaborate with product, backend, and data engineering teams to define requirements, break down complex problems, and deliver high-impact features aligned with business objectives.
Build robust data ingestion and retrieval pipelines that power real-time and batch AI applications using open-source and proprietary tools.
Integrate external data sources (e.g., knowledge graphs, internal databases, third-party APIs) to enhance the context-awareness and capabilities of LLM-based workflows.
Evaluate and implement best practices for prompt design, model alignment, safety, and guardrails for responsible AI deployment.
Stay on top of emerging AI research and contribute to internal knowledge-sharing, tech talks, and proof-of-concept projects.
Author clean, well-documented, and testable code; participate in peer code reviews and engineering design discussions.
Proactively identify bottlenecks and propose solutions to improve system scalability, efficiency, and reliability.
What We Value
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
5+ years of hands-on experience in applied AI, NLP, or ML engineering (with at least 2 years working directly with LLMs, RAG, or semantic search).
Deep familiarity with LLMs (e.g. OpenAI, Claude, Gemini), prompt engineering, and responsible deployment in production settings.
Experience designing, building, and optimizing RAG pipelines, semantic search, vector databases (e.g. ElasticSearch, Pinecone), and Agentic or multi-agent AI workflows.
Exposure to GraphRAG or graph-based knowledge retrieval techniques is a strong plus.
Strong proficiency with modern ML frameworks and libraries (e.g. LangChain, LlamaIndex, PyTorch, HuggingFace Transformers).
Ability to design APIs and scalable backend services, with hands-on experience in Python.
Experience building, deploying, and monitoring AI/ML workloads in cloud environments (AWS, Azure) using services such as AWS SageMaker, AWS Bedrock, and Azure AI.
Familiarity with MLOps practices, including CI/CD for AI, model monitoring, and data versioning.
Demonstrated ability to work with large, complex datasets, including data cleaning, feature engineering, and developing scalable data pipelines.
Excellent problem-solving, collaboration, and communication skills; able to work effectively across remote and distributed teams.
Proven record of shipping robust, high-impact AI solutions, ideally in fast-paced or regulated environments.