Are you aware of the hidden costs in your product's raw material?

Accurately calculating raw material costs is a cornerstone of should-cost modeling. By identifying the materials required, determining the cost per unit, and accounting for potential waste and additional costs like handling and transportation, you can develop a comprehensive and reliable cost model.

Key Parameters for the Should-Cost Process in Material Calculation:

# Raw Material Identification:
· Material type and grade
· Material source/origin

# Material Quantity:
· Required quantity (per unit or batch)
· Packaging units

# Material Cost per Unit:
· Supplier quotes
· Market prices
· Historical data
· Discounts and bulk pricing

# Material Waste or Loss:
· Scrap/waste factor
· Defects and rejections

# Handling and Storage Costs:
· Material handling
· Storage costs (rent, insurance, utilities)
· Inventory management

# Freight and Transportation:
· Shipping costs
· Delivery method (air, sea, road)
· Customs and tariffs

# Lead Time and Order Frequency:
· Lead time variations
· Order volume

# Supplier Terms and Conditions:
· Payment terms
· Return and warranty policies
· Exchange rates (for imported materials)

# Material Substitution and Alternatives:
· Substitute materials
· Material optimization

# Environmental and Regulatory Factors:
· Recycling or sustainability initiatives
· Regulatory compliance

# Operational Overheads Related to Materials:
· Processing costs
· Energy costs

-------------------------------------------------------------------------------------

# Ask Yourself:
-> Did you consider the net weight and gross weight calculation properly?
-> Did you consider scrap weight and scrap cost in your estimation?
-> Do you have access to a global raw material index and a recent material price database?
-> Have you asked your supplier for the raw material cost per kg as well as the scrap cost per kg?
-> Did you consider manufacturing overhead (MOH) and raw material inventory cost?
-> What about the scrap cost percentage for different commodities?
-> Did you optimize material usage through strip layout, nesting, cavity count, and other techniques?
-> What's your strategy when the supplier asks for a material cost increase due to market fluctuations?
-> Did you consider the volume/batch/MOQ impact, as well as regional cost impact, in your calculations?
-> Did you consider any coating and primer requirements at the raw material stage?
-> Commodity-specific considerations, etc.
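The net/gross weight and scrap questions above reduce to a simple per-part calculation: buy the gross weight, credit back the recoverable scrap. A minimal sketch — all figures and the scrap-credit convention here are illustrative assumptions, not numbers from the post:

```python
def raw_material_cost(gross_weight_kg: float,
                      net_weight_kg: float,
                      material_price_per_kg: float,
                      scrap_price_per_kg: float = 0.0) -> float:
    """Material cost per part: purchase cost of the gross weight minus scrap credit.

    gross_weight_kg: material purchased per part (blank/billet weight)
    net_weight_kg:   material remaining in the finished part
    scrap_price_per_kg: resale value of offcuts/chips (0 if scrap is not recovered)
    """
    scrap_weight_kg = gross_weight_kg - net_weight_kg
    purchase_cost = gross_weight_kg * material_price_per_kg
    scrap_credit = scrap_weight_kg * scrap_price_per_kg
    return purchase_cost - scrap_credit

# Illustrative numbers: 1.2 kg blank, 0.9 kg finished part,
# steel at 1.50/kg, scrap resold at 0.30/kg
cost = raw_material_cost(1.2, 0.9, 1.50, 0.30)
print(round(cost, 2))  # 1.8 purchase - 0.09 scrap credit = 1.71
```

The same function also answers the MOH/volume questions indirectly: batch and MOQ effects would enter through `material_price_per_kg`, which is where bulk discounts land.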
Engineering Product Development Stages
Explore top LinkedIn content from expert professionals.
-
Many organizations approach innovation the way they approach budgeting or operations. They create roadmaps, timelines, and committees designed to produce breakthroughs on schedule. But the history of technology suggests something different.

Most meaningful innovations do not arrive neatly on a calendar. They appear unexpectedly. A new idea. A technical breakthrough. A surprising connection between two things that previously seemed unrelated. The real challenge is making sure your organization is ready when those moments appear.

The companies and institutions that consistently innovate tend to invest early in talent and technical capability. They build cultures where experimentation is encouraged and where people are willing to test new ideas. They maintain the flexibility to pursue unexpected opportunities and move quickly when promising ideas appear.

Innovation rarely begins as a fully formed plan. More often it begins as a possibility that only a few people recognize at first. The advantage goes to the organizations that have prepared themselves to recognize that moment and act on it. You may not be able to schedule inspiration. But you can build teams, systems, and cultures that are ready when it shows up.

#SchmidtSights
-
Most analytics projects fail before a single query gets written. The reason? Things get missed during the requirements-gathering phase. I've rebuilt reports from scratch because the original ask got lost in translation, and delivered something technically correct that solved the wrong problem. It happens when your notes are just a wall of bullet points with no structure.

Here's the system I now run inside Granola on every project intake call. Granola pulls the meeting from my calendar and transcribes everything silently. I focus on the conversation and keep only three things in my notepad:

𝟭. 𝗧𝗵𝗲 𝗿𝗲𝗮𝗹 𝗮𝘀𝗸: Not what they said, but what they actually need. Sometimes you need to ask "why" several times to get here.

𝟮. 𝗦𝗰𝗼𝗽𝗲 𝗯𝗼𝘂𝗻𝗱𝗮𝗿𝘆: What we explicitly agreed is NOT included. Define what they need as well as what they don't; both are equally important.

𝟯. 𝗢𝗽𝗲𝗻 𝗾𝘂𝗲𝘀𝘁𝗶𝗼𝗻𝘀: Anything still ambiguous that will bite me later. Ask as many questions as you need to get as clear as possible.

When the call ends, Granola builds the summary from the transcript but anchors it to my notes, so the output is organized, not just a dump of everything that was said. I copy the scope boundary and open questions straight into my project doc. Requirements don't get fuzzy. Scope creep has less room to start. And when a stakeholder comes back six weeks later requesting big changes, I have a clean record of exactly what was decided.

If you run intake calls, discovery sessions, or any meeting where the output is a set of requirements, try this framework. Check out Granola here: https://lnkd.in/eKhy8Sss
_____
♻️ Repost if this was helpful ✌️ Hi, I'm Matt. Follow for more data, AI, and career tips. #granolapartner
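The three-part note above can be sketched as a simple record type. The field names and the sign-off rule are my own illustration, not a Granola feature:

```python
from dataclasses import dataclass, field

@dataclass
class IntakeNotes:
    """Structured notes from a project intake call (illustrative, not a Granola API)."""
    real_ask: str  # what they actually need, after asking "why" a few times
    out_of_scope: list[str] = field(default_factory=list)    # explicitly NOT included
    open_questions: list[str] = field(default_factory=list)  # ambiguities to resolve

    def ready_for_signoff(self) -> bool:
        # Don't close requirements while anything remains ambiguous.
        return bool(self.real_ask) and not self.open_questions

notes = IntakeNotes(
    real_ask="Weekly churn by region, to prioritise retention spend",
    out_of_scope=["Real-time dashboard", "Historical backfill before 2023"],
    open_questions=["Which churn definition: 30-day or 90-day inactivity?"],
)
print(notes.ready_for_signoff())  # False: one question is still open
```

The point of the structure is the same as in the post: the scope boundary and open questions are first-class data, not lines buried in a transcript dump.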
-
I've spent over 4,000 hours in stakeholder requirement-gathering meetings! Save hours of your life by asking these questions:

1. What do they plan to use the data for?
· What initiative are they working on?
· How will this initiative impact the business?
· Is this for reporting or optimizing existing workflows?
Understanding the purpose of the data helps you define its impact.

2. How do they plan to use the data? Will they access it via SQL, BI tools, APIs, or another method?
· Do they have a workflow to pull data from your dataset?
· Do they just do a `SELECT *` from your dataset?
· Do they perform further computations on your dataset?
This determines the schema, partitions, and data accessibility needs.

3. Is this data already present in another report/UI?
· Is this data already available in another location?
· Do they have parts of this data (e.g., a few required columns) elsewhere?
Ensuring you're not recreating work saves time and avoids redundancy.

4. How frequently do they need this data?
· How frequently does the data actually need to be refreshed?
· Can it be monthly, weekly, daily, or hourly?
· Is the upstream data changing fast enough to justify the required latency?
Understanding frequency helps you determine the pipeline schedule.

5. What are the key metrics they monitor in this dataset?
· Define variance checks for these metrics.
· Do these metrics need to be 100% accurate (e.g., revenue) or directionally correct (e.g., impressions)?
· How do these metrics tie into company-level KPIs?
Memorize average values for these metrics; they're invaluable during debugging and discussions.

6. What will each row in the dataset represent?
· What should each row represent in the dataset?
· Ensure one consistent grain per dataset, as applicable.

7. How much historical data will they need?
· Does the stakeholder need data for the last few years?
· Is the historical data available somewhere?
Ask these questions upfront, and you'll save countless hours while delivering exactly what stakeholders need. - Like this post? Let me know your thoughts in the comments, and follow me for more actionable insights on data engineering and system design. #data #dataengineering #datastakeholder
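The "variance checks" from question 5 can start as something very simple: compare the latest metric value against a trailing average. A minimal sketch — the window and the 25% threshold are illustrative assumptions, not rules from the post:

```python
def variance_check(history: list[float], current: float,
                   max_rel_change: float = 0.25) -> bool:
    """Return True if `current` is within max_rel_change of the trailing mean.

    history: recent values of the metric (e.g., daily revenue for the last week)
    max_rel_change: allowed relative deviation; 0.25 means ±25% (illustrative)
    """
    if not history:
        return True  # nothing to compare against yet
    baseline = sum(history) / len(history)
    if baseline == 0:
        return current == 0
    return abs(current - baseline) / abs(baseline) <= max_rel_change

# Revenue has averaged ~100/day; a sudden 180 should be flagged.
print(variance_check([98.0, 101.0, 99.0, 102.0], 180.0))  # False (fails the check)
print(variance_check([98.0, 101.0, 99.0, 102.0], 95.0))   # True  (within tolerance)
```

For "100% accurate" metrics like revenue you would typically pair this with exact reconciliation against the source system; a tolerance check like this suits the "directionally correct" metrics.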
-
CAPEX estimation for low-maturity technology projects is challenging, particularly when we talk about new equipment. Yet we still need fairly accurate figures to justify the viability of the technology and secure funding for its development. How to do it? Here is what we usually do for hydrogen and carbon capture projects.

1. Define the Project Scope
Start by clearly outlining all project boundaries, objectives, and deliverables. Identify every cost element required for full-scale implementation, from engineering and design to construction and commissioning, while distinguishing between one-off investments and those that can be standardised.

2. Develop the First-of-a-Kind (FOAK) CAPEX Estimate
• Detailed Bottom-Up Analysis: Break down the project into its individual components, accounting for bespoke engineering, pilot testing, specialized installations, and comprehensive project management.
• Risk and Contingency: Due to the innovative nature and inherent uncertainties of FOAK projects, incorporate generous contingencies to cover design modifications, unforeseen challenges, and regulatory uncertainties.
• Documentation: Maintain thorough records of assumptions and decisions made during this phase, as these will inform future projects.

3. Extend to the Nth-of-a-Kind (NOAK) Estimate with Learning Curves
Leverage the insights from the FOAK phase to isolate repeatable cost elements. With each subsequent build, learning curves drive efficiencies:
• Standardize Processes: As you replicate the project, streamline designs and processes.
• Realize Efficiency Gains: Experience leads to better vendor relationships and operational refinements, translating into significant cost reductions for repeatable components.
• Adjust Estimates: Update your cost models to reflect these improvements, using your own or reported learning curves, to ensure more accurate and lower capital expenditure projections for future projects.

4. Implement Continuous Improvement
Regularly revisit and refine both FOAK and NOAK estimates. As more operational data becomes available, adjust your assumptions and conduct sensitivity analyses to maintain a robust, realistic CAPEX projection.

How do you estimate CAPEX for your technology? #Innovation #research #hydrogen #carboncapture #science #scientist #chemicalengineering
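The learning-curve adjustment in step 3 is commonly modelled with Wright's law: each doubling of cumulative builds multiplies unit cost by a fixed learning rate. A minimal sketch — the 85% learning rate and the cost figures are illustrative assumptions, not numbers from the post:

```python
import math

def noak_capex(foak_capex: float, n: int, learning_rate: float = 0.85) -> float:
    """Estimated CAPEX of the n-th unit under Wright's law.

    learning_rate: cost multiplier per doubling of cumulative builds,
    e.g. 0.85 means each doubling cuts unit cost by 15% (illustrative).
    """
    b = math.log(learning_rate, 2)  # learning exponent (negative for rates < 1)
    return foak_capex * n ** b

# FOAK plant at 100 M: the 8th plant sits three doublings later (1 -> 2 -> 4 -> 8),
# so on an 85% curve it costs 100 * 0.85**3 ≈ 61.4 M
print(round(noak_capex(100.0, 8), 1))  # 61.4
```

In practice the FOAK-specific, one-off costs identified in step 2 would be stripped out first and only the repeatable cost elements put through the curve.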
-
How do you gather requirements... when there's literally nothing to start with? (Brand-new project in the Discovery / Inception phase)

If you've ever been handed a brand-new project and thought, "Where do I even start with requirements?" ...you're not alone. When there's no existing system, no previous project to reference, and stakeholders aren't quite sure what they need yet, it can feel like you're building the plane while flying it. Here's the approach I use as a Business Analyst when I'm brought in at square one:

→ Understand the business context. What are we trying to solve? Why now?
→ Map out key stakeholders. Don't just talk to the usual suspects: bring in Legal, Compliance, Security, etc., early. Cross-functional input saves you rework later.
→ Break the project into logical categories. Whatever the project is delivering, break it down into high-level process steps. This helps when workshopping requirements with stakeholders, so they can focus on one area at a time.
→ Capture high-level needs. And yep, I use user stories here too, even at this early stage. It keeps things outcome-focused.
→ Document just enough. I don't write 50-page BRDs anymore. I use Confluence tables, Jira, and lightweight templates that the whole team can engage with.

The goal at this stage? Clarity, alignment, and momentum... not perfection. Because let's be honest: the first version of your requirements will evolve. And that's a good thing.

💡 Want to become the kind of BA who can confidently lead from day one of a project? Learn how to:
✅ Guide discussions when the path isn't clear
✅ Keep documentation lean but effective
✅ Become the go-to for "what are we actually trying to do here?"

Question for you: How do you approach requirements for a brand-new project? Do you use a BRD, Confluence, sticky notes… or something else?

If you found this helpful, give me a follow Matthew Thomas. I share regular micro-lessons to help you level up your BA career.
#BusinessAnalysis #RequirementsGathering #NewProjects #BusinessAnalystLife #AgileBA #LeanDocumentation #UserStories #Confluence #Jira
-
#VPspeak [^581]

We spend a lot of time talking about AI writing code, but the most expensive mistakes in software don't happen in the IDE; they happen in the requirements phase. 📉 The traditional process of manual interviews and static Business Requirement Documents (BRDs) is undergoing a major shift. ✅ The goal isn't just to write documents faster; it's to ensure we are building the right thing from day one. Here is how AI is changing the front end of the SDLC:

1️⃣ Collapsing the Analysis Timeline: Traditionally, gathering requirements meant weeks of stakeholder meetings and manual synthesis. AI tools can now ingest hundreds of hours of raw interview transcripts, support tickets, and legacy docs to identify core themes and contradictions in minutes. It's turning a three-week discovery phase into a three-day validation exercise. 🚀

2️⃣ Detecting "Requirement Drift" Early: One of the biggest risks in any project is conflicting requirements from different departments. AI models are now used to cross-reference new requests against existing business logic and technical constraints. If a new requirement contradicts a core system capability, the "Value Engineer" gets a red flag before a single line of code is written. 🧠

3️⃣ From Static Text to Interactive Prototypes: Instead of a 60-page PDF that nobody reads, AI is being used to convert BRDs into functional wireframes and "intent-based" prototypes instantly. This lets stakeholders see the requirement in action. It moves the conversation from "I think I understand the doc" to "I can see exactly how this feature will work." 🥊

🏗️ The Bottom Line: Requirement analysis in 2026 is becoming less about documentation and more about clarity. By using AI to handle the synthesis and consistency checks, we reduce the "interpretation tax" that usually leads to costly mid-project pivots. For me, I have seen this work first hand. What do you think?
Is your team still wading through 50-page BRDs, or have you started using AI to get to the "Definition of Ready" faster?
-
#APQP in IATF 16949 (Automotive Quality Management System)

APQP (Advanced Product Quality Planning) is a structured, preventive quality planning methodology required by IATF 16949 to ensure that products meet customer requirements, are robust at launch, and are capable in mass production.

🔹 Why APQP is Important in IATF 16949
IATF 16949 focuses on risk prevention, defect avoidance, and process robustness; APQP is the core tool to achieve this. APQP helps to:
· Prevent defects before production
· Reduce launch issues & customer complaints
· Ensure cross-functional coordination
· Meet customer-specific requirements (CSR)
· Demonstrate compliance during IATF audits
📌 APQP is mandatory for automotive suppliers (#Tier-1, #Tier-2, etc.)

🔄 APQP – 5 Phases (as per AIAG & IATF)

#Phase 1: Plan & Define Program
Goal: Understand customer needs and risks
Key Outputs: Voice of Customer (VOC), feasibility study, risk assessment, product quality goals, project timing plan
📌 IATF Clause Link: 8.2, 6.1 (risk-based thinking)

#Phase 2: Product Design & Development
Goal: Design a product that meets functional & quality requirements
Key Outputs: DFMEA, design reviews, design verification & validation, special characteristics identification
📌 Applies if design responsibility exists

#Phase 3: Process Design & Development
Goal: Develop a stable & capable manufacturing process
Key Outputs: Process flow diagram, PFMEA, control plan (prototype / pre-launch / production), work instructions, layout & capacity planning
📌 Very critical for gear manufacturing

#Phase 4: Product & Process Validation
Goal: Validate product and process before SOP
Key Outputs: PPAP submission, MSA (Gauge R&R), SPC / process capability (Cp, Cpk), Run @ Rate, initial sample inspection report (ISIR)
📌 IATF Clause Link: 8.5.1.1, 9.1

#Phase 5: Feedback, Assessment & Corrective Action
Goal: Continuous improvement after SOP
Key Outputs: Customer feedback & PPM monitoring, lessons learned, corrective actions, process audits & LPA
📌 IATF Clause Link: 10.2, 9.2

📄 Key APQP Documents (Audit Focus)
#APQP Timing Plan
#DFMEA / PFMEA (linked)
#Control Plan (linked with PFMEA)
#MSA & SPC records
#PPAP approval
#Change management (4M)
#Customer approvals

⚠️ Common audit gaps
❌ APQP treated as paperwork
❌ Weak linkage between PFMEA & Control Plan
❌ 4M changes without APQP review
❌ Lessons learned not captured

⚙️ APQP in Gear Manufacturing (Practical Focus)
· Tooth profile, lead & runout → special characteristics
· Heat treatment risks → PFMEA focus
· Fixture & gauge capability → MSA critical
· Tool wear & setup changes → control plan updates
· Noise & durability → validation testing

#APQP #IATF16949 #AutomotiveQuality #QualityEngineering #PFMEA #DFMEA #PPAP #MSA #SPC #GearManufacturing #RiskBasedThinking #ContinuousImprovement
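The process-capability outputs in Phase 4 follow the standard formulas: Cp = (USL - LSL) / (6σ) for potential capability, and Cpk = min(USL - μ, μ - LSL) / (3σ) for capability allowing for centring. A minimal sketch; the sample measurements and spec limits are illustrative, not from the post:

```python
from statistics import mean, stdev

def process_capability(samples: list[float], lsl: float, usl: float) -> tuple[float, float]:
    """Cp and Cpk from measured samples and lower/upper spec limits.

    Cp  = (USL - LSL) / (6 * sigma)              -- potential capability
    Cpk = min(USL - mu, mu - LSL) / (3 * sigma)  -- accounts for off-centre mean
    """
    mu = mean(samples)
    sigma = stdev(samples)  # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Illustrative gear-bore diameters (mm) against a 10.00 ± 0.05 spec
measurements = [10.01, 9.99, 10.02, 10.00, 10.01, 9.98, 10.02, 10.00]
cp, cpk = process_capability(measurements, lsl=9.95, usl=10.05)
print(f"Cp={cp:.2f}, Cpk={cpk:.2f}")
```

Cpk is always ≤ Cp; the gap between them shows how much capability is lost to an off-centre process mean, which is exactly what a PPAP reviewer looks at.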
-
It’s easy to say the problem is money. Or procurement. Or that a new “rapid adoption” plan will fix it all. It won’t. Defence capability is a layered cake, and we keep arguing about the icing.

First layer: political will. What is allowed, what is not, which capabilities, how much money, how much power and decision-making goes to defence, what data can be shared, how AI can be used. These are national decisions, defined by national laws. Very political. Very uneven across Europe.

Second layer: for 30+ years, since the Cold War ended, defence in most countries has been cut, leaned out, and optimised for stability. People, structures, doctrines, hierarchies: they don’t stretch like rubber bands. You don’t just “add people” and expect linear growth. A lot of this needs redesign, not reinforcement.

Then comes industry. Scaling a traditional defence industrial base is hard because it requires long-term, credible commitments over decades. Not crisis spikes. Not political cycles. Without that, you don’t build factories, supply chains, or production depth.

Then talent. Engineers, EW experts, cyber specialists, among many others. This is long-cycle, rigid, unglamorous work. You also can’t hire just anyone: backgrounds, ties, clearances matter. That narrows the pool further.

Then supply chains. Raw materials. Chips. Equipment to produce equipment. We’ve already seen how fragile this gets when something breaks or gets cut off.

Then innovation. Startups and SMEs move fast, but most cannot scale. Testing is limited. In areas like EW, Europe doesn’t even legally allow testing against real, current threat environments like those seen in Ukraine. That matters.

And across all of this: culture. Are problems named honestly? Do feedback loops actually reach decision-makers? Is risk aversion blocking learning? Even terminology isn’t aligned between NATO and the EU, let alone across countries.
So no, this is not solved by “more money”, “faster procurement”, or a single rapid adoption initiative. It’s a full system problem. Political, industrial, cultural, operational. And even this is still a simplification. Anything else is just spending without capability. #Defence #Capability #Deterrence #Europe #NATO #DefenceIndustry #Talent #RapidAdoption