
Dynamic Data

IT Services and IT Consulting

Solutions for data-driven decision makers

About us

We are a multidisciplinary team that came together to offer a comprehensive service with a rare combination of technical abilities, business knowledge, and interpersonal skills. We will help you effectively measure your digital marketing efforts and optimize their performance. We will also provide powerful data visualizations that communicate insights effectively and help build customer loyalty.

Website
https://www.dynamicdata.com/
Industry
IT Services and IT Consulting
Company size
11-50 employees
Headquarters
Carlsbad
Type
Self-Owned
Founded
2020
Specialties
BigQuery, SQL, Google Analytics, Google Tag Manager, Google Data Studio, Tableau, Snowflake, R, Python, Stitch, and dbt


Updates

  • Data powers AI, but context makes it intelligent. Diego Prinzi and Marcos Monguillot recently represented Dynamic Data at FabCon '26 in Atlanta. Their biggest takeaway? AI doesn't fail for a lack of data; it fails for a lack of meaning. Swipe through for a quick look at the tools shaping the future of enterprise data, and read the full blog here: https://bit.ly/48fCEiU #DynamicData #FabCon26 #DataArchitecture #ArtificialIntelligence

  • The 2026 dbt Labs State of Analytics Engineering Report just dropped, and at Dynamic Data, one takeaway is impossible to ignore: velocity without validity is a fast track to bad decisions. AI is making teams faster (72% now prioritize AI-assisted coding), but it is also creating a massive bottleneck. Here is where the industry is actually shifting:

    • Trust > speed: 83% of data teams say trust is their top priority, and 71% are deeply concerned about hallucinated data reaching stakeholders.
    • The governance gap: AI scales our output, but testing and validation aren't keeping up. We must treat trust as infrastructure.
    • The cost squeeze: 57% report compute spend is outpacing budget growth. Efficiency is no longer optional.

    Data engineering is no longer just about building pipelines; it's about controlling them. What is your biggest challenge in 2026: scaling speed, ensuring trust, or controlling costs? Read the full report here: https://lnkd.in/gNjdA-_s #AnalyticsEngineering #DataStrategy #DynamicData #DataGovernance #dbt

  • If you are wondering whether you can just use AI on top of your data to ask questions and get dashboards generated automatically, this is the answer: you can, but you should be willing to accept wrong results for at least 30% of your prompts. If you need close to 100% accuracy, a semantic model needs to be created to provide context and definitions to the LLM.

    From dbt Labs:

    The 2026 dbt Semantic Layer vs. text-to-SQL benchmark is in. Spoiler: text-to-SQL got way better, but the dbt Semantic Layer still won on accuracy. Since we last ran this in 2023, frontier models have gotten dramatically better at writing SQL. So we reran it to see if the gap had closed. Here's what we found:

    • Text-to-SQL accuracy nearly doubled, from 33% to 64%. These models are genuinely good at SQL now.
    • The dbt Semantic Layer hit 98.2% with claude-sonnet-4-6 and 100% with gpt-5.3-codex.
    • Text-to-SQL fails silently, while the dbt Semantic Layer fails loudly: it tells you it can't answer. For KPIs, board decks, and auditors, that makes a big difference.
    • Better modeling helps both approaches.

    The benchmark is open source, so you can run it on your own data. Full benchmark data, methodology, and the open-source repo are in the blog: https://lnkd.in/g79bCWAv

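As a rough illustration of why a semantic layer "fails loudly" while text-to-SQL fails silently, here is a minimal Python sketch. The metric names and SQL definitions are invented for the example; a real semantic layer (such as dbt's) would resolve questions against governed metric definitions rather than a dict.

```python
# Hypothetical metric registry standing in for a semantic layer: each business
# term maps to exactly one governed, pre-agreed SQL definition.
SEMANTIC_LAYER = {
    "revenue": "SELECT SUM(amount) FROM orders WHERE status = 'complete'",
    "active_users": "SELECT COUNT(DISTINCT user_id) FROM events WHERE ts >= CURRENT_DATE - 30",
}

def answer_with_semantic_layer(metric: str) -> str:
    """Resolve a question against governed definitions; fail loudly on a miss."""
    sql = SEMANTIC_LAYER.get(metric)
    if sql is None:
        # Unlike raw text-to-SQL, we refuse rather than hallucinate a query.
        raise LookupError(f"No governed definition for metric {metric!r}")
    return sql
```

A raw text-to-SQL approach would instead emit its best-guess query for any question, including ones it misunderstands; the registry makes a wrong definition impossible and a missing one explicit, which is what matters for KPIs and audits.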
  • Congratulations to our CEO, Victoria Gallerano, on earning her dbt Architect Certification and taking home the prize in the dbt Labs Quest for the Vest! Victoria’s dedication to mastering the latest in data engineering is exactly what drives Dynamic Data forward every single day. Having a leader who not only guides the vision but also rolls up her sleeves to achieve architect-level certifications is truly inspiring to our entire team!

    From dbt Labs:

    Help us celebrate our Winter Quest for the Vest winners from across the dbt Labs partner ecosystem 👏 These individuals went above and beyond to earn their dbt Architect certification:

    • Alexander Lecocq – Senior Consultant, Analytics8 | Data & Analytics Consultancy
    • André Pegoraro Neto – Analytics Engineer, Indicium AI
    • Aniruddha Aggarwal – Senior Consultant, Analytics8 | Data & Analytics Consultancy
    • Carlos Manuel Perales Gómez – Data Engineer, Cívica
    • César CLAVÉ – Data Engineer, Devoteam
    • Daniel Eduardo Rivero Jiménez – Analytics Engineer, SDG Group
    • David Aas Correia – Manager & Senior Consultant, Crayon Consulting
    • Elmo Pimentel – Data Engineer, Vivanti Consulting
    • Victoria Gallerano – CEO, Dynamic Data
    • Marko Laitinen – Data Engineer, Recordly
    • Meagan Palmer – Principal Consultant, Altis Consulting
    • Michael Carlone – Director, Analytics Engineering, Brooklyn Data (Velir's data studio)
    • Oscar Siauw – Sr Consultant, Slalom
    • Simon Hannemann – Head of Marketing Science, Hopmann Marketing Analytics GmbH
    • Tero Toropainen – Senior Consultant, Twoday
    • Thomas FRANCOIS-RAKETAMANGA – Data Engineer, SFEIR
    • William Guicheney – Principal Analytics Engineer, Aimpoint Digital

    The dbt Architect exam assesses your ability to design secure, scalable dbt implementations, with a focus on environment orchestration, role-based access control, integrations with other tools, and collaborative development workflows aligned with best practices. Congratulations and thank you to all of our partners who joined the quest and raised the bar for dbt best practices.

  • Is your CRM suffering from "Data Amnesia"? 🧠📉 Most sales dashboards act like Polaroids—they show you exactly what your pipeline looks like right now, but they completely forget how you got there. If a $100k deal shrinks to $50k this morning, does your reporting surface the loss, or does the total just quietly update? At Dynamic Data, we help teams move from static "State-based" reporting to dynamic "Time-series" analytics using the Modern Data Stack (dbt + Fabric/Snowflake/BigQuery). By capturing historical snapshots, you can finally answer: ✅ Pipeline Velocity: How fast are deals actually moving? ✅ Close Date Drift: How many times has that "commit" been pushed? ✅ The Bridge Report: Exactly why did the pipeline change since last Monday? Stop guessing and start forecasting. Read our latest deep dive on how to give your CRM a memory: 🔗 http://bit.ly/4sK5BMd

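A minimal Python sketch of the "Bridge Report" idea above, assuming two hypothetical pipeline snapshots keyed by deal id. In practice the snapshots would come from dbt snapshots or a warehouse history table; the deal ids and amounts here are invented.

```python
# Two hypothetical point-in-time snapshots of the pipeline, keyed by deal id.
last_monday = {"d1": 100_000, "d2": 40_000, "d3": 25_000}
today       = {"d1": 50_000,  "d2": 40_000, "d4": 30_000}

def bridge_report(before: dict, after: dict) -> dict:
    """Explain why the pipeline total changed between two snapshots."""
    new     = {k: after[k] for k in after.keys() - before.keys()}
    dropped = {k: before[k] for k in before.keys() - after.keys()}
    resized = {k: after[k] - before[k]
               for k in before.keys() & after.keys()
               if after[k] != before[k]}
    return {
        "new": sum(new.values()),           # deals created since last snapshot
        "dropped": -sum(dropped.values()),  # deals closed, lost, or removed
        "resized": sum(resized.values()),   # existing deals that grew or shrank
        "net_change": sum(after.values()) - sum(before.values()),
    }

report = bridge_report(last_monday, today)
# The net change must equal the sum of its explanations -- that's the "bridge".
assert report["net_change"] == report["new"] + report["dropped"] + report["resized"]
```

The $100k deal shrinking to $50k shows up explicitly in "resized" instead of silently updating the total, which is exactly the memory a state-based dashboard lacks.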
  • Dynamic Data reposted this

    Wrapping up an incredible week at #FabCon2026! 📍 Marcos Monguillot and I are heading home with a clear vision of where Dynamic Data is taking our clients in 2026. Here are our highlights from today's sessions:

    🤖 MODULAR AGENT ARCHITECTURE: The future isn't one "giant" bot; it's modularity. We saw a powerful demonstration of specialized agents (Data Agents, Data Science Agents, and Loyalty Agents) working in sync. This modular approach ensures higher accuracy, easier maintenance, and the ability for a Routing Agent to orchestrate complex business logic effortlessly.

    🚀 DIRECT LAKE ON ONELAKE, THE STRATEGIC PATH: We spent a lot of time today breaking down why Direct Lake is officially the "gold standard" for BI in Fabric. It effectively kills the legacy trade-offs between speed and scale:
    • vs. IMPORT: It eliminates the "latency tax." Instead of duplicating data into proprietary caches and waiting for refresh windows, Direct Lake feeds the engine directly from Delta Parquet files. You get Import-level performance with the freshness of live data.
    • vs. DIRECT QUERY: It removes the SQL middleman. By loading memory-resident columns directly into the Power BI engine, you avoid the heavy overhead of SQL translations and the compute strain on your source systems.
    • vs. DIRECT LAKE ON SQL: This is the big one: OneLake security integration. We can now enforce security models (RLS/CLS) natively within the storage layer. This ensures that security isn't just a "wrapper" that forces a slow fallback to SQL, but a high-performance part of the Direct Lake experience.

    💡 PERFORMANCE PRO-TIPS FOR POWER BI: Here are some tools and practices to optimize Direct Lake:
    • DELTA ANALYZER: A massive new tool to identify bottlenecks. It helps ensure your files are optimized for Direct Lake by checking for V-Order, schema compatibility, and file sizes.
    • PARTITIONING: Partition your fact tables by the same DateKey used in your relationships to significantly reduce the compute load on your capacity. The key rule: V-Order partitioning benefit = # reads / # writes.

    What a week! We're ready to put these tools to work at Dynamic Data. 🛠️ #MicrosoftFabric #FabCon2026 #DynamicData #AI #DirectLake #PowerBI #DataEngineering #OneLake #DeltaAnalyzer

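The partitioning tip can be illustrated with a small Python sketch: grouping fact rows by DateKey mirrors how a date-partitioned Delta table lets a filtered query skip files outside the filter. The table, column names, and rows are invented for the example; a real implementation would partition Delta Parquet files, not in-memory dicts.

```python
from collections import defaultdict

# Hypothetical fact rows; in Fabric these would live in Delta Parquet files.
fact_sales = [
    {"DateKey": 20260101, "amount": 10},
    {"DateKey": 20260101, "amount": 5},
    {"DateKey": 20260102, "amount": 7},
]

def partition_by_date_key(rows):
    """Group fact rows into partitions keyed by DateKey, the same key used
    in the model's relationships."""
    parts = defaultdict(list)
    for row in rows:
        parts[row["DateKey"]].append(row)
    return dict(parts)

def scan(parts, date_keys):
    """A date-filtered query only touches the matching partitions; all other
    partitions are skipped entirely, which is where the compute savings come from."""
    return [r for k in date_keys for r in parts.get(k, [])]

parts = partition_by_date_key(fact_sales)
jan1 = scan(parts, [20260101])  # reads 1 of 2 partitions
```

The post's rule of thumb (benefit = reads / writes) fits this picture: partition maintenance is paid on every write, while the skip-scan savings are collected on every read, so partitioning pays off most on read-heavy fact tables.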
  • Dynamic Data reposted this

    Reporting live from Atlanta! 📍 Marcos Monguillot and I are representing the Dynamic Data team here at #FabCon2026. The biggest theme? The evolution of FABRIC IQ. We are moving beyond just "unifying data" to "empowering AI agents" through a structured layer of semantic knowledge. Here's what's fueling our roadmap for the coming year:

    🚀 THE RISE OF FABRIC IQ: This is the new "Intelligence" workload in Fabric. It's not just about data; it's about Ontology, Digital Twin Builder, and Graph. It's the glue that allows AI to understand the meaning and relationships behind the tables, not just the raw numbers.

    🤖 DATA AGENTS ARE GA: These virtual analysts are officially ready for prime time. They function as both MCP servers and clients, meaning they can invoke tools, fetch context, and be consumed across M365 Copilot or custom endpoints. We are looking forward to testing their reliability using the new Python SDK within Fabric Notebooks to ensure accuracy.

    ⚡ OPERATIONAL INTELLIGENCE: The trend is shifting from standard BI (semantic models) to operational intelligence (ontologies). By streaming real-time data into Eventhouse and modeling it with Graph, we can use Operations Agents to actually trigger actions and support live business processes.

    🔐 GOVERNANCE-FIRST AI: All these agent interactions are now logged in Purview for auditing. Security (RLS/CLS) is baked in, so "agentic" doesn't mean "unregulated."

    Fabric is no longer just a place to store data; it's a platform to build thinking systems. We're excited to bring these "Fabricated" solutions back to our clients at Dynamic Data! #MicrosoftFabric #FabCon2026 #DynamicData #AIAgents #FabricIQ #DataEngineering #DigitalTwin

  • Delivering holiday cheer (and insights) from the North Pole! 🎅✨ Our team took a quick break from building models and running queries to help Santa out in the workshop. We truly believe that great data work starts with a great team culture, and we've had a blast "engineering" some holiday magic together this year. Thank you to our clients and partners for a fantastic year. We are ready and excited to "sleigh" new challenges with you in 2026. 🛷 📊 Merry Christmas from all of us at Dynamic Data! 🎄 🎁 #CompanyCulture #DreamTeam #HappyHolidays #DataLife #Christmas2025 #DataAnalytics

  • We are looking forward to seeing Select Star's great features embedded into the Snowflake environment 💪

    Today marks a big milestone for our team: Snowflake has announced its plans to acquire the Select Star platform! ⭐❄️ When I started Select Star more than five years ago, the goal was simple: make data easy to find, understand, and use. Over the years, I’ve seen how far metadata context can take an organization – from self-service analytics to automating data governance, with clear insight into which data matters and why. That context has only become more important as AI moves into the center of how teams work. What’s made me most proud lately is seeing customers make real business impact with our Ask AI, powered by the metadata context we’ve built across lineage, popularity, and semantic models. It reinforced something we believed from day one: AI only works when it truly understands the data behind it. Joining Snowflake gives us the opportunity to bring that vision to a much larger stage. We’ll be focused on bringing Select Star’s lineage and discovery capabilities into Snowflake Horizon Catalog and strengthening the metadata layer that agentic AI depends on. I’m so grateful to our team, customers, and investors who believed in this mission and supported us along the way. And I’m excited for what we’ll build together in this next chapter with Snowflake.

