Roboflow

Software Development

Used by over 1 million engineers to deploy computer vision applications.

About us

Roboflow creates software-as-a-service products to make building with computer vision easy. Over 1,000,000 developers use Roboflow to manage image data, annotate and label datasets, apply preprocessing and augmentations, convert annotation file formats, train a computer vision model in one click, and deploy models via API or to the edge. https://roboflow.com

Website
https://roboflow.com
Industry
Software Development
Company size
51-200 employees
Headquarters
Remote
Type
Privately Held

Updates

  • We are working on something new, and Asfandiyar Khan built this application in 10 minutes. It used to take roughly 10 days to deliver a car-counting project from scratch (data acquisition, labeling, training, application logic).

    I wanted to validate a quick idea, so I prompted our new Agent and it built this for me from scratch in 10 minutes (left and right lanes are marked from the driver's POV). Three months ago, if you told me I'd be able to do this so quickly, I'd politely call you a liar. A lot of production-deployed CV work has always been plumbing between pieces. We're now at a point where that layer gets easier, and teams get more room to focus on the parts that determine whether a model performs exceptionally well in production: data quality, training configs, and deployment. More coming soon on this front. Stay tuned!
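The application logic of a lane-aware car counter like the one described above can be approximated in a few lines: bucket each detected vehicle into a lane by its box centroid. This is an illustrative sketch only; the function name, box format, and the single-divider lane heuristic are assumptions, not the Agent's actual output.

```python
def count_by_lane(detections, lane_divider_x):
    """Split detected vehicle boxes into left/right lanes (driver's POV).

    detections: list of (x1, y1, x2, y2) bounding boxes in pixels.
    lane_divider_x: vertical pixel line separating the two lanes.
    """
    counts = {"left": 0, "right": 0}
    for x1, y1, x2, y2 in detections:
        cx = (x1 + x2) / 2  # horizontal centroid of the box
        if cx < lane_divider_x:
            counts["left"] += 1
        else:
            counts["right"] += 1
    return counts

boxes = [(10, 40, 60, 90), (300, 50, 360, 110), (320, 200, 390, 260)]
print(count_by_lane(boxes, lane_divider_x=200))  # → {'left': 1, 'right': 2}
```

In a real pipeline the boxes would come from a detection model running per frame, with a tracker deduplicating vehicles across frames before counting.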

  • Claude Code + Roboflow is something we are seeing more and more for building vision applications. Agentic coding paired with purpose-built infra increases speed. When an agent is handling boilerplate code and Python scripts, you get to focus on applying your domain expertise to your application. Huge props to Alexander Britton for sharing his workflow and building in the open!

    Made a quick video showing part of the workflow behind a baseball computer vision project I've been working on in my free time with Driveline Baseball Enterprises, Inc. I walk through how I used Roboflow as one part of the pipeline for uploading images, organizing and versioning the dataset, and then carrying that into our own training and post-processing workflow. The point of the video isn't really the tooling by itself; it's more about showing how the whole process connects. Going from raw image data, to dataset management, to training, to the final visual output is something that never really gets much love. I think a lot of people only ever see the finished result, but the interesting part is how all the pieces actually tie together to make that result possible. In this case, that meant turning labeled footage into a system that could eventually play a role in our broader tech stack. Cool project to be a part of. 😎 #ComputerVision #SportsTech #BaseballTech #MachineLearning #Python #DataPipeline #MLOps #AI #Baseball

  • Exciting to see Nicolai trying RF-DETR. He tests and reviews tons of models, so for him to say "it's the best mask around the objects I have seen in such a fast model" really means a lot! Our recent 1.6 release added Composable Lightning Training, which increased training speed by ~30%. https://lnkd.in/eEBrkV2g

    You can also run segmentation with the fully open-source RF-DETR model from Roboflow in real time 🔥🔥 A few lines of code and you have the model running, and it's the best mask around the objects I have seen in such a fast model, even for the Nano version. It's based on the transformer architecture, so you get an awesome level of detail and also very fast inference.

    from rfdetr import RFDETRSegMedium

    model = RFDETRSegMedium()
    detections = model.predict(frame_rgb)

  • AI is moving so fast. The CIO of a 1,000+ person company is using Lovable and Roboflow to automate 600 hours per month of work with vision AI in an industrial laboratory setting. Rodrigo Silva created a step-by-step guide so you can see how he did it: https://lnkd.in/eBepFMc7

    Roboflow is a computer vision platform that lets you organize image datasets, annotate objects, train models, and deploy inference via API or at the edge. I want to share our article published on the Roboflow blog, which presents a real-world case of automating conidia counting with computer vision. In the article, we show how a manual, repetitive task subject to variation was transformed into a digital, standardized, and far more scalable process, with the help of Roboflow and Lovable to streamline the laboratory routine and make decision-making more agile. The results go beyond operational efficiency. The case highlights gains in standardization, traceability, reduced subjectivity, and freeing teams for higher-value work. For professionals working in quality, laboratories, industry, innovation, data, or AI applied to business, it is well worth reading. https://lnkd.in/dEwYBZFk
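At its core, the automated conidia count reduces to: run inference on a micrograph, keep detections of the target class above a confidence threshold, and report the count. A minimal sketch of that last step; the prediction dictionary format, class name, and threshold are assumptions for illustration, not the article's actual code.

```python
def count_conidia(predictions, min_confidence=0.5):
    """Count detections of class 'conidium' above a confidence threshold.

    predictions: list of dicts with 'class' and 'confidence' keys, as a
    detection API might return them (format assumed for this sketch).
    """
    return sum(
        1 for p in predictions
        if p["class"] == "conidium" and p["confidence"] >= min_confidence
    )

preds = [
    {"class": "conidium", "confidence": 0.91},
    {"class": "conidium", "confidence": 0.42},  # below threshold, ignored
    {"class": "debris", "confidence": 0.88},    # wrong class, ignored
    {"class": "conidium", "confidence": 0.77},
]
print(count_conidia(preds))  # → 2
```

The confidence threshold is what replaces the human judgment call; calibrating it against manually counted reference slides is where the standardization gains come from.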

  • Roboflow reposted this

    Sylvie Goldner, Roboflow

    Physical AI improves quality control. We've all seen the news stories... Pfizer recalled 1M packs of birth control pills due to incorrect packaging. Friendly's shipped Cookies & Cream in Vanilla Bean packaging. Coca-Cola recalled 13,000+ cases of "Zero Sugar" lemonade that was actually full sugar. Same root cause every time: wrong label, wrong package. Computer vision catches label mismatches before a single unit leaves the line.
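The label-mismatch check described above is conceptually simple: compare the label class a vision model reads off each package against the SKU the line is supposed to be running, and flag anything that disagrees. A toy sketch under that assumption; the function names and label strings are illustrative, not any production system's API.

```python
def check_unit(expected_sku: str, detected_label: str) -> bool:
    """Return True if the detected package label matches the expected SKU."""
    return detected_label.strip().lower() == expected_sku.strip().lower()

def flag_mismatches(expected_sku, detected_labels):
    """Return indices of units whose detected label does not match the SKU."""
    return [
        i for i, label in enumerate(detected_labels)
        if not check_unit(expected_sku, label)
    ]

# One production run: every unit should carry the "zero-sugar" label.
labels = ["zero-sugar", "zero-sugar", "full-sugar", "zero-sugar"]
print(flag_mismatches("zero-sugar", labels))  # → [2]
```

In practice the `detected_label` would be the top class from a classifier or OCR read on the package face, and a flagged index would trigger a reject mechanism before the unit leaves the line.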

  • Roboflow reposted this

    Joseph Nelson, Roboflow

    In AI, we all get by with a little help from our friend, Jensen Huang 🤝 At NVIDIA GTC this year, we'll be telling the inside story of how to work with NVIDIA to win markets like vision and physical AI. That includes how we've attracted over half the Fortune 100 in critical industries like manufacturing, supported millions of developers, and published SOTA model architectures like RF-DETR. We can't do this by ourselves. Our friends at NVIDIA have been critical to the compute, inference optimization, and distribution that enable everyone to benefit from visual AI. If you miss the session, catch us at booth 1637 in the main exhibit hall. We've got live demos, consultations with engineers, and swag (+treats) till supplies run out. Alyss Noland #VisionAI #PhysicalAI #NVIDIAGTC

  • Physical and embodied AI are the next big wave. This webinar covers how robotics teams are using vision to get an edge over traditional LIDAR-centric navigation. LIDAR has been the gold standard for years. It's reliable, it's functional, and it gets a robot from Point A to Point B. But as global supply chains become more complex and our facilities become more crowded, visual understanding is required to create truly autonomous systems. Vision AI unlocks autonomy because when a robot can see, it understands dynamic environments. This shift is powering the next generation of factory automation, moving robots out of structured environments and onto the floor alongside human workers. Join the live session to learn more about vision-enabled robots.

    Next week, join me and Vishrut Kaushik of Peer Robotics for a live conversation about integrating computer vision with industrial robots. We'll check out their automated movement systems, how visual intelligence unlocks additional capabilities for them, and lessons learned by Vishrut's team along the way. Register here: https://luma.com/kj3h0mwv

  • RF-DETR is unlocking human motion systems thanks to its Apache 2.0 license and improved performance over YOLO models. Incredible open repo in this post from Saif K. for you to get started with and build on top of. Open source for the win!

    Saif K.

    Human motion tracking systems typically follow a top-down pipeline (detect → crop → estimate). In practice, especially in lab setups or tools like #freemocap and #pose2sim, this means a person detector runs independently, and the pose model operates on fixed-resolution crops.

    There is a licensing problem here. Since YOLOv5, Ultralytics models use AGPL-3.0. If you build proprietary commercial software (e.g., motion tracking SaaS), you must either open-source your system or purchase an enterprise license. For that reason, many open-source pipelines (#rtmlib, #sports2d, #pose2sim) still rely on YOLOX (2021), the last Apache-2.0 YOLO variant.

    After ~1.5 years of using YOLOX in motion tracking setups, I've found it poorly suited for high-quality lab tracking. It is highly sensitive to object orientation and produces significant frame-to-frame box jitter, even for nearly static subjects. That instability propagates to pose outputs. You can smooth it offline or add causal filters online, but then you introduce lag. For real-time use cases (e.g., VR animation), that trade-off is undesirable.

    A better alternative is now available: RF-DETR (ICLR 2026, Apache-2.0) by Roboflow. In my side-by-side comparisons against YOLOX, it is noticeably more stable in low-motion scenes, with far less bounding box wobble. It also avoids NMS, eliminating manual tuning and associated false positives. While YOLOX can be faster, detector stability often matters more for downstream pose quality than raw FPS.

    To make adoption easier, I built #OpenDetect: a minimal wrapper around RF-DETR (and YOLOX) using ONNX Runtime, with CUDA, TensorRT, and Apple acceleration supported out of the box. While I focus on the person class for pose estimation, detection for all COCO classes is supported. Apache-2.0 license, free for commercial use. No obligation to make your code public.

    🚀 GitHub: https://lnkd.in/dFwwg4e5
    🤓 Docs: https://lnkd.in/d67qCQKj

    If your subject is mostly static and your keypoints still jitter, improve the detector first. A stable detector can clean up your entire pose pipeline without modifying the pose model. #computervision #poseestimation #opensource
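One way to quantify the frame-to-frame box jitter described in the post is the mean displacement of the box center across consecutive frames for the same (nearly static) subject: a stable detector should score near zero. This is a sketch of one possible metric, not code from the OpenDetect repo; all names here are assumptions.

```python
import math

def center(box):
    """Center point of an (x1, y1, x2, y2) box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def mean_jitter(boxes_per_frame):
    """Mean Euclidean displacement of box centers between consecutive frames.

    boxes_per_frame: one (x1, y1, x2, y2) box per frame for the same subject.
    Lower is better; a perfectly stable detector on a static subject gives 0.
    """
    if len(boxes_per_frame) < 2:
        return 0.0
    deltas = []
    for prev, cur in zip(boxes_per_frame, boxes_per_frame[1:]):
        (px, py), (cx, cy) = center(prev), center(cur)
        deltas.append(math.hypot(cx - px, cy - py))
    return sum(deltas) / len(deltas)

stable = [(100, 100, 200, 300)] * 5  # identical boxes every frame
wobbly = [(100, 100, 200, 300), (104, 98, 204, 302), (98, 103, 198, 299)]
print(mean_jitter(stable))  # → 0.0
print(mean_jitter(wobbly))  # a few pixels of wobble per frame
```

Measuring this on a static calibration subject gives a cheap, detector-agnostic way to compare YOLOX and RF-DETR before looking at downstream keypoint quality.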

Funding

Roboflow: 6 total rounds
Last round: Series B, US$40.0M
See more info on Crunchbase