Mountaineering is a sport that is near and dear to my heart. Though these days I am more of a trail runner than a climber, it has been a passion of mine that started early and turned into a hobby that shaped my life choices. As a weekend warrior, I managed to get a few memorable days out in the mountains. Some of these were alpine climbs that involved long treks with camping and climbing gear, scaling moderately technical routes, returning to camp exhausted in the dark, and trying to sleep through the cramps of an overtired body.
From the planning stages through execution, I have often thought there are similarities between how one approaches an alpine climb and a sizable business project:
Planning the route involves researching both technical and non-technical aspects. There is usually enough literature out there to determine the approach, where to camp and find water, how to climb, how to descend, and, if needed, where to bivouac. It is good to have a plan, but in the end plans are only plans, and in execution you may have to improvise, as outside elements will always have the upper hand. Both advance planning and improvisation require experience.
You need to have a team with overlapping and complementary skills. Some sections of the climb could be more technical and it would be good to have a team member proficient in that technique, but the division of labor is not always possible and it would help if everyone could lead (i.e., deliver).
The route is often chosen depending on how strong the team is, but sometimes you set a goal beyond current levels and need to train for your objective, both as an individual and as a team. In business, individual training mostly translates to hiring the right talent, but practicing as a team is the more relevant part: prior accomplishments in delivering complex projects build the team dynamics that underpin future successes.
Being out in the wilderness means you need to be self-sufficient after committing to the route, as help could be tens of miles away. The same commitment applies to most technically involved business projects: once some of the legacy data workflows are altered, the only choice is to deliver the new tool.
Long routes can be intimidating, but it is always good to focus on the next section. Trust the process while keeping an eye on the time, and a proficient team will collaboratively tackle consecutive challenges and finish the climb according to plan, much like in agile project management.
In alpine climbs there is a balancing act between moving light & fast as a small team vs. heavy & slow as a larger team. It is not always clear if one approach is riskier than the other. Moving fast could mean a higher probability of making mistakes, but at the same time a larger team would mean a slower climb with more exposure to weather changes. The choice depends on the team’s experience and the route, but moving light & fast is usually the better style.
Accidents tend to happen on the descent, when everybody is tired and time is running short (it is getting dark, or about to rain or snow). Even when weary, you have to keep your composure and not take shortcuts on safety, which requires the team to have the techniques internalized. It is not ideal to be a novice team in this stressful situation.
If your gut tells you that you are not ready to commit to the route, there must be a reason and postponing the climb is usually the right choice as there is always another day. Technical rescue is not something with guaranteed success and may not be even possible, so it is better to be safe than sorry. You can train more and come back when the time is right.
Özgür
April, 2024
When correctly configured, ‘analytics’ adds tremendous value to companies. Having a background in this domain, I am happy to see that more companies now recognize its potential and invest resources into building tools and processes around it. Here is my take on how to approach projects that involve analytics in order to maximize ROI. My examples are from a planning perspective, but the observations apply to other areas as well.
Initial design is critical - Many operational decisions will have a systematic dependence on the initial analytical assumptions and it could be very difficult to change these later. It is good practice to involve a capable cross-functional team who can blend analytical and domain knowledge in design stages.
Plan for user interaction & estimate workload - Unless the goal is to manage every decision algorithmically, you will need some level of user input (often referred to as 'blending art & science'). You have to estimate how often users will need to review/revise these touch points. If the level of interaction can stay at exception management, adoption will not be an issue. Otherwise, users will be overwhelmed by the required effort and may end up developing their own offline workarounds.
Make analytics available where users need it - If the goal is to support a decision making process, any manual effort to bring analytics from another source means lost time and potential mistakes. Data is needed at the right level of granularity and at the point of decision making.
Own end-to-end how analytics is consumed - Options could be limited due to dependencies on other systems, but having direct access to how users consume analytics (both estimates and overrides) ensures it is used consistently with the behind-the-scenes assumptions driving the calculations. Without knowing these critical assumptions, another team could design a downstream workflow that uses your estimates with unintended, and possibly wrong, consequences.
You will make mistakes - There is no off-the-shelf standard approach to analytics projects, and mistakes are unavoidable. In most cases the design will need to take existing workflows into account, some of which are ingrained in organizational structures. Use an iterative approach and learn from your mistakes. If your planning software vendor does not offer the capability to quickly revise initial assumptions, there is a considerable risk that the end result becomes irrelevant sooner than planned.
Spreadsheets help - A black-box approach often leads to user questions. Some of the analytical calculations will no doubt require sophisticated algorithms, and it is not practical to explain every detail, but users still need guidance on how the underlying assumptions work. Easy access to spreadsheets (either as the tool itself or through seamless export) will help users validate calculations and identify unexpected behavior.
Periodic calibration is a must - Embrace the fact that leveraging analytics to deliver business value is a journey. Even if there is no major change in system dynamics, you will still need to calibrate models regularly. Measure the accuracy of key metrics, and use these calibration efforts as an opportunity to assess user satisfaction and to identify early signals of any foundational changes that might be needed down the road. (A rough SQL sketch of such an accuracy check comes after this list.)
Poorly configured analytics hinders - The ever-changing dynamics of business make analytics a critical component of planning. The alternative may not even be feasible, and you may have to design workflows centered around analytical assumptions. This dependency also means that a poor effort in design or execution can hurt instead of help.
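Here is the accuracy-check sketch promised above: a weighted MAPE per category in Postgres-flavored SQL. This is only an illustration; the forecast and sales tables and their columns are hypothetical placeholders, not any particular vendor’s schema.

    -- Hypothetical tables: forecast(fcst_date, category, forecast_units)
    --                      sales(sales_date, category, actual_units)
    SELECT f.category,
           CAST(SUM(ABS(s.actual_units - f.forecast_units)) AS NUMERIC)
             / NULLIF(SUM(s.actual_units), 0) AS wmape        -- weighted MAPE
    FROM forecast AS f
    JOIN sales AS s
      ON s.sales_date = f.fcst_date
     AND s.category   = f.category
    WHERE f.fcst_date >= CURRENT_DATE - INTERVAL '13 weeks'   -- one quarterly calibration window
    GROUP BY f.category
    ORDER BY wmape DESC;                                      -- worst-calibrated categories first

Tracking a handful of such metrics at every calibration cycle is usually enough to spot a drifting model before users lose trust in it.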
Özgür
March, 2024
It is fair to categorize the collective IT tools, software, and processes in a company as a complex system. They evolve over time with many cross-dependencies and are critical to operational efficiency. When things need to change, there is usually not enough time or budget for a clean-slate redesign, and the effort often ends up as a partial upgrade or a patch. Especially in a dynamic industry such as retail, with constant change in consumer behavior and complex supply chains, the delta between the ‘as is’ and ‘to be’ states of IT systems keeps increasing.
Planning teams, however, cannot afford to operate in the ‘as is’ state only. After all, even for a new channel not yet defined in the systems, they need to somehow forecast sales and write the PO in time, before lead-time restrictions hit.
Not only is handling the ‘new’ difficult; there can also be mismatches in ‘existing’ data components, such as between the granularity of data available in systems and the level needed for planning. For instance, IT might define several location IDs for the same store (e.g., sales, returns, display, and a DC location ID for units allocated/in transit to the store). From a planning perspective, however, they are all one location, and total system inventory is most likely the best metric on which to base inventory decisions.
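To make the fix concrete, here is a minimal SQL sketch (all table and column names are invented for illustration) of how a mapping table rolls the several IT-level location IDs up to the single planning-level store:

    -- Hypothetical mapping:  location_map(system_location_id, planning_store_id)
    -- Hypothetical snapshot: inventory(location_id, sku, units)
    SELECT m.planning_store_id,
           i.sku,
           SUM(i.units) AS total_system_inventory   -- sales, returns, display & in-transit IDs combined
    FROM inventory AS i
    JOIN location_map AS m
      ON m.system_location_id = i.location_id
    GROUP BY m.planning_store_id, i.sku;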
Another major issue stems from planning teams’ need to dynamically move SKUs or styles between merchandise hierarchies or collections, without going through painful & irreversible reclassing efforts, so that they can produce a better forecast. Similarly, the team might choose to define new product attributes (e.g., pricing strategy) that are relevant for planning without affecting source systems.
An interesting need we observed at Demand Known arose from a client’s shift in supply chain strategy, driven by new tariffs introduced on one of their origin countries. They moved their operations to another country and started purchasing the exact same SKUs from other vendors (or from factories of the original vendors). By design, the IT systems assigned new IDs to these SKUs, though there was no visible change to the product from the consumer’s perspective. Having two separate SKUs means all the valuable historical reference is lost or misplaced, and reconciling old & new SKUs when planning forward projections would take considerable manual effort.
To close the gap between the source data and the planning needs of our clients at DK, we developed a concept we call the ‘virtual planning layer’. With simple user-driven mapping tables and algorithms, we transform the original source data to a granularity that lets our users plan at the level they need with minimum effort.
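As a rough illustration of the idea (a simplified sketch with made-up names, not DK’s actual implementation), a user-maintained mapping table can redirect a retired SKU’s history into the planning SKU that replaced it:

    -- Hypothetical user-driven mapping: sku_map(source_sku, planning_sku)
    -- Hypothetical history:             sales_history(sku, week, units_sold)
    SELECT COALESCE(m.planning_sku, s.sku) AS planning_sku,   -- unmapped SKUs fall through unchanged
           s.week,
           SUM(s.units_sold) AS units_sold
    FROM sales_history AS s
    LEFT JOIN sku_map AS m
      ON m.source_sku = s.sku
    GROUP BY COALESCE(m.planning_sku, s.sku), s.week;

The same pattern covers the tariff example above: map the old and new SKU IDs to one planning SKU, and the historical reference is preserved without touching the source systems.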
Özgür
February, 2024
In retail, the common approach to defining categories is based on product attributes. It is the natural result of how consumers search for a product when shopping in a (physical or online) store. They might have a very clear idea of what they want to buy, or they may know the general area of need and look for some inspiration from the retailer. In any case, categories lead them to the right section of the store, and when there is a match it is a win-win for both parties.
Although product attributes are the starting point, from a planning perspective categories are not as clear-cut as what consumers see. First of all, companies have a tendency to define categories within a product hierarchy (division > department > class), which usually aligns with an organization structure managing the business with clear roles & responsibilities. If the retailer has a stable business, these product-driven definitions will usually converge to a manageable structure over time (after many painful "reclass" efforts in IT systems). However, the critical assumption here is "stability", and as a lecturer of mine in grad school used to say, strong assumptions mean weak theorems.
In the retail age of multi/omni-channels & social shopping, parts of these legacy hierarchies can lose relevance soon after they are laboriously defined in systems. Retailers will find themselves in a tough spot unless they have the planning capability to do the following (a rough data-model sketch comes after the list):
add/remove categories at will
define categories outside the product hierarchy (by channel, region, or other attributes such as customer segments)
manage categories at different levels (class in one channel and department in another).
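One way to support all three capabilities (a sketch under my own assumptions, not a prescription for any particular software) is to store categories as plain data rather than as a hard-coded hierarchy:

    -- Hypothetical schema: a category is just a named set of member keys,
    -- so it can live at any level and cut across the product hierarchy.
    CREATE TABLE planning_category (
        category_id   INTEGER PRIMARY KEY,
        category_name TEXT NOT NULL,
        dimension     TEXT NOT NULL     -- e.g., 'product', 'channel', 'region', 'segment'
    );

    CREATE TABLE planning_category_member (
        category_id INTEGER NOT NULL REFERENCES planning_category (category_id),
        member_key  TEXT    NOT NULL,   -- a SKU, class, department, store, or channel code
        PRIMARY KEY (category_id, member_key)
    );

Adding or removing a category then becomes an insert or a delete, not a system-wide reclass.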
In a planning software implementation, it is very difficult to nail these definitions, which form the building blocks of the process, on the first try. Quick implementation cycles and an iterative approach to revisiting "business requirements" without depleting the budget certainly help.
Özgür
January, 2024
When we first designed our planning solution service at Demand Known (DK), retail companies were beginning to realize that multi/omni-channels were the new state of the industry. The initial approach was more consumer-focused than a true supply chain redesign: as long as companies could offer products to consumers through different channels, they would be ‘multi-channel retailers’. This made sense considering that supply chains and their supporting IT systems are difficult to adapt to change. However, companies quickly realized that the legacy silo approach of managing distinct sales channels through distinct physical supply chains was not going to work. They had to define/repurpose channels and look for logistics efficiencies across supply chains like never before. This necessitates a more dynamic approach to defining "categories", which are the foundational building blocks of a planning solution.
At DK, I am glad that we noticed this newly increased complexity early on and prioritized the less visible components (backend architecture, flexibility of our data model) over UI. Of course, this does not mean user interaction is not an integral part of the planning process; but as long as all data components (history, analytics, overrides) are exposed to the user at the point of decision making, in a format that is easy to interact with such as spreadsheets, a flexible architecture is a better guarantee of the solution’s success.
Özgür
January, 2024
Originally, I set up this website for my LLC a while back, but since then my life has changed quite a bit. After two kids and a SaaS company (Demand Known), I thought I should turn it into a blog for all things data, analytics, planning & retail. Hope you enjoy my musings :)
Özgür
January, 2024
As I build planning tools and deliver consulting projects, I have started thinking about an interesting question.
Is it really planning software that companies need? A software package with a well-defined workflow, logic, and user interface that handles data, presents historical performance and future projections to users, and lets them decide a course of action for their business… Being ‘well-defined’ is the key here, and in a changing business environment very few planning processes are:
They could be well-defined today, but will they be tomorrow? They may not even be relevant in the future.
What if a re-organization effort artificially removes or introduces product categories?
What about when a company decides to expand into new geographies & channels?
Or when a supply chain is redesigned by adding a new distribution center?
In addition to these, there is a new challenge/opportunity redefining many industries as more data become available to companies. Here is another plausible scenario: in pursuit of predictive power, what if the talented data scientist on your team correlates a new data set (let’s say mall traffic projections) with your forecast and shows that you can explain your demand reasonably well with this new relationship? In many companies it would be years before this information is leveraged as a planning capability (if at all). Unless…
Unless they have invested into building a ‘planning platform’. The definition is probably subjective (and quite possibly ever-evolving), but here is my version of it:
The most critical component is the scalability & availability of all data sources to any application. In other words, you should be able to add new tables/fields/rows into your platform while letting all applications access the source & staging tables (with certain limitations, so that admins are not kept on their toes all the time). Given the nature of planning (i.e., decisions & the physical flow of goods can be modeled as time series), I think relational databases are sufficient for most cases; however, a NoSQL database could be needed (and increasingly so) as companies explore social networks and other unstructured data sources. (A rough schema sketch follows this list.)
Similarly, a scalable analytics engine is needed (on the server side) in which all algorithms reside & crunch through data sources.
A sandbox environment where your data scientists can experiment with next generation algorithms.
Because adapting to changing conditions is a fundamental requirement, the business intelligence (BI) & decision-making (DM) capabilities somehow need to merge in the planning platform. Traditionally, BI (one-way information flow) and DM (two-way information flow) tools are separated from each other, essentially due to the complexity of the latter (i.e., its data architecture and workflow). Nevertheless, I believe it is possible to build a flexible architecture if you are OK with some coding effort associated with each change; the sketch below illustrates one such write-back pattern.
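Pulling the last two points together, here is a minimal sketch in Postgres-flavored SQL (my own illustration; every table and column name is invented): a plain relational time-series layout for the platform data, plus a write-back table that merges the one-way BI flow with the two-way DM flow:

    -- Hypothetical time-series fact table: one row per key per period.
    CREATE TABLE demand_ts (
        sku            TEXT NOT NULL,
        location       TEXT NOT NULL,
        week           DATE NOT NULL,
        units_sold     INTEGER,
        forecast_units INTEGER,        -- model estimate written by the analytics engine
        PRIMARY KEY (sku, location, week)
    );

    -- Hypothetical write-back table: user decisions live beside the read-only estimates.
    CREATE TABLE forecast_override (
        sku            TEXT NOT NULL,
        location       TEXT NOT NULL,
        week           DATE NOT NULL,
        override_units INTEGER NOT NULL,
        overridden_by  TEXT NOT NULL,
        overridden_at  TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
        PRIMARY KEY (sku, location, week)
    );

    -- The plan reads the user override when present, the model estimate otherwise.
    SELECT d.sku, d.location, d.week,
           COALESCE(o.override_units, d.forecast_units) AS planned_units
    FROM demand_ts AS d
    LEFT JOIN forecast_override AS o
      ON o.sku = d.sku AND o.location = d.location AND o.week = d.week;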
A cloud-based architecture might be the ideal platform for the above definition. Alternatively, a reasonably sized company could achieve the same with an on-site server. I would even argue that the combination of SQL, R, and Excel is the ultimate planning platform for many small companies.
Because change is the only constant in business, the emphasis is really on the accessibility of data sources & the flexibility of the overall architecture (databases as well as the user interface). A planning platform will require a more hands-on approach than some companies may prefer in terms of coding & maintenance effort, but I believe it will be the most effective approach to planning in the near future.
Özgür
Originally posted: April, 2014
One of the interesting transitions I have witnessed in retail has been the significant change in all dimensions of data: not only has it become more granular, but more categories of data have also become accessible to planning functions.
Retail companies are the entities closest to the end customer of the supply chain: the consumer. This position requires collecting tremendous amounts of very detailed transaction data, a must in the retail business. Consider the following situation:
Customer: For this very random reason, I’ll rightfully return this product, though I don’t remember which store I bought it from.
Retailer: Oops, we cannot see how much we charged you for this product, so we’ll credit the ticket price back to you.
Hence, my observation that big data keeps getting bigger comes down to this proximity-to-the-consumer factor. What I argue, though, is that this transition parallels what is happening in our society as more computational power becomes available. Smartphones, social networks, image processing, and cloud computing are a few of the keywords frequently pronounced in our everyday lives. Companies, in turn, are simply enjoying what Moore’s Law predicts.
The big data transition is a great opportunity for a manager who knows how to utilize it. Localized assortment planning with reliable store-SKU forecasts, localized pricing, and demand planning with social networks and web-based trends are now very real possibilities. The limitation is neither analytical capability nor the existence of reliable data, but legacy planning systems (which will eventually be retired).
It could also be a curse if all you know about data is limited to spreadsheet programs. My humble observation is that there is now an ever-expanding gap: the rate of increase in data far exceeds the spreadsheet capabilities of individual users. Getting the big picture can now be a difficult task. For those who do not know what to do with so much data, the risk is either getting stuck on an outlier case or drawing conclusions from a limited sample.
Given this increasing gap, the traditional way of building IT solutions based on ‘business requirements documents’ and restricted interaction is no longer viable. What big data requires is cross-functional, data-capable analytical teams that operate as intermediaries between business and IT organizations. This is not a team simply put together from ex-business and ex-IT folks, but one of data scientists, optimization experts, and experienced data & business analytics consultants who can unleash the capabilities of SQL and spreadsheets together. Such teams not only facilitate discussions between the two organizations but also make the tool design process more interactive, with rapid experimentation of ideas. In the end, no one really knows in advance which assumptions and models about the data will consistently work.
Most companies are organized as silos, and so are their datasets (even though IT might store the tables on the same server). Thus, in addition to enhancing IT-business interaction, these analytical teams could use data as the common ground to work with business functions that do not otherwise communicate (at least not enough to impact the business). For example, inventory management teams use sales and inventory data, but they may not know about, or may choose to ignore, the store traffic trends that consumer insights teams usually track. Yet these two data sets could well be related: a decreasing sales trend in a well-inventoried store could be due to an assortment problem, even when there is sufficient traffic. Looking only into silos of data will not help break the cyclic nature of retail planning: sales drop → buy less inventory → sales drop.
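Here is a hedged sketch of that cross-silo check (the weekly tables and their columns are invented for illustration): join the inventory team’s data with the insights team’s traffic feed, and flag well-stocked stores where conversion, rather than traffic, is falling:

    -- Hypothetical weekly tables: store_sales(store, week, units),
    -- store_inventory(store, week, on_hand), store_traffic(store, week, visits)
    SELECT s.store,
           s.week,
           CAST(s.units AS NUMERIC) / NULLIF(t.visits, 0) AS conversion
    FROM store_sales AS s
    JOIN store_inventory AS i ON i.store = s.store AND i.week = s.week
    JOIN store_traffic   AS t ON t.store = s.store AND t.week = s.week
    WHERE i.on_hand > 0              -- the store is well inventoried
      AND t.visits > 0               -- and traffic is there
    ORDER BY conversion ASC;         -- low conversion points to an assortment problem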
Usually, IT organizations are not seen as centers of innovation. However, big data presents a real technical challenge to business units. Cloud-based solutions may help individual business functions maintain their own small data warehouses, yet a holistic approach demands IT’s greater involvement. My proposed intermediary analytical teams have the best chance of success if they can easily access IT’s computational capabilities.
In summary, big data is now more accessible, and companies continuously explore new ways of using these datasets to increase profitability. For retail in particular, the opportunities are far greater than in any other industry, thanks to proximity to consumers and the availability of structured data (social networks, CRM, POS, inventory, traffic, e-commerce). Yet integrating all this valuable information into a predictive planning process is a difficult task, one that requires much closer interaction and engagement than the traditional IT-business relationship. Analytical consulting teams with enhanced data capabilities, who can facilitate and guide this interaction, are now more important than ever in achieving this goal.
Özgür
Originally posted: February, 2014