Aimée Cowher Joins Board of Directors, Ronald McDonald House of Detroit

Congratulations to Aimée Cowher

Thursday, February 27, 2014

GPS is proud to announce that co-founder and CEO Aimée Cowher was recently elected to the Board of Directors of Ronald McDonald House of Detroit. Aimée has supported the organization since 2006.

Devoted to the renaissance of the Detroit area, Aimée is involved with several local charities and organizations that are working to ensure the Greater Detroit Area has a prosperous and sustainable future.

 

 “Aimee, your continued leadership, financial and volunteer support and history as a past resident made a great impression on our board members… I cannot wait to have you on our dynamic team of board members.”

Jennifer J. Litomisky
Executive Director of RMHC of SE MI

Aimée is also the co-founder of the Kyle John Rymiszewski Foundation, a charity supporting the Children’s Cardiomyopathy Foundation in remembrance of her son Kyle, who lost his life at 16 to hypertrophic cardiomyopathy.

While serving on the board of directors, Aimée will continue her role as CEO of Global Productivity Solutions.

Please join us in congratulating Aimée on her newest endeavor.

The Global Productivity Solutions Team


Creating Enhanced Organizational Capabilities

Guest post by Rob Wardlow.

When seeking to improve the organizational capability of a business, it is often helpful to envision the optimal state the organization is attempting to achieve.

So, in the continuous improvement arena, what does the optimal state look like?  Let’s tackle this from three complementary avenues: structure, knowledge, and culture.

What should you examine regarding structure for a continuous improvement organization?  I think most of us would acknowledge that we work in resource-constrained environments, so prioritizing the things that will be worked on is of critical concern.  A structural mechanism for identifying and prioritizing improvement opportunities is therefore an essential aspect of an optimal CI organization.  Once CI opportunities are identified, prioritized and ultimately assigned, the organization needs some mechanism to address them.  The spectrum runs from assigning the CI activity directly to those in the area, with no support, all the way to assigning the activity to some entity outside of the area.

The type of opportunity will dictate the approach taken, but let’s discuss a few and which approach is likely best.  First come opportunities that have existed for some time.  These probably need to be addressed with outside expertise.  The reason?  If internal folks had the knowledge and expertise to improve the item, they most likely would have done so already.  Next comes a situation where something new is going to be created or significant modifications are going to be made.  In this case you likely want those conducting the activity to apply specific tools and techniques so as to prevent problems from creeping into the solution.  Lastly, there is the situation where emotional buy-in is critical to sustaining the improvement.  Here, a facilitated approach works best, one in which the owners of the process are brought along on the journey to the improved state.

I’ve intentionally separated the knowledge component from the structural component.  Many organizations (mostly at the insistence of consultants) assume that a certain body of knowledge requires a particular organizational structure.  I think this is backwards.  You first need to understand the structural component; once that is addressed, you can identify the appropriate knowledge component needed to address the need.

What body of knowledge exists in the CI world that addresses identification and prioritization of opportunities?  The best answer is the Theory of Constraints, which is not part of many organizations’ BOK.  The TOC approach holds that there is one constraint in a system, and only by addressing that constraint can you truly improve the system.  Next comes Lean, as it addresses the wastes that exist and how to reduce and eliminate them.  One source of waste that may be uncovered relates to the quality and productivity of a system.  When excessive variation causes quality and/or productivity issues, these are best addressed by a DMAIC Six Sigma approach.  And when looking to ensure that problems never enter the system in the first place, application of a Design for Six Sigma method is appropriate.  Don’t confuse DfSS with merely regular Six Sigma applied to design; rather, it is a methodology that seeks to anticipate and prevent rather than uncover and reduce.

Lastly, but certainly not least, is culture.  Culture will be largely driven by the structure and knowledge described above, along with cultural factors like reward systems, advancement opportunities, etc.  Careful examination of the impacts of these cultural aspects is needed.  As an example, some organizations choose to incentivize improvements by rewarding the close-out of cost-saving projects.  This can result in the same “problems” being solved time after time, with no real improvement resulting.  Likewise, a policy of promotion through serving in a CI position can result in rapid turnover of projects for promotion’s sake, with little actual improvement.

Envision the structure, knowledge and culture of your ideal CI organization as the first step in bringing it about.

Smart Operational Excellence® Coaches Offer Genuine Interest and Ask the Right Questions


Guest post by Alex Figueroa

As businesses leverage Operational Excellence®, results matter. The huge positive impact that Operational Excellence® efforts can have on business results has typically been measured by translating primary and secondary process metrics into financial results, where reduced cost, lower levels of cash required to operate the business and increased revenue are, generally, the ultimate goals.

There is, however, one more dimension of business results that can be attained through disciplined operation, recovering losses and eliminating excess process variation: enhanced individual and organizational capabilities. These critical organizational results might be harder to measure than conventional KPIs. At the end of the day, Operational Excellence® – through proper coaching and mentorship – can unleash further opportunities for any business to improve continuously through the aligned efforts of knowledgeable, passionate, motivated and talented individuals. And doing so can become a clear enabler of creating more value for current and future customers.
Now, Operational Excellence® coaching is not so much about teaching and training others as it is about collaborating with businesses and individuals to reach a better state: a new state that delivers better operational results by building the right structure, creating superior knowledge and promoting the desired culture across the organization. From my vantage point, the best thing Operational Excellence® coaches can do is demonstrate genuine interest in the people they are coaching and ask the right questions.

When Operational Excellence® professionals actively coach others with an open expectation to challenge what is possible, every individual involved – including the coach – reaches a better state. And the approach to coaching can be quite simple:

  • What – Expose people to new and relevant tools they were not aware of before and that can be used to solve the operational challenges they are facing.
  • How & When – Coach others to use the tools in the right context, with discipline, to objectively experiment with what they have learned. Regularly provide informal feedback.
  • Who and Why – Operational Excellence tools will be used by individuals who want to learn them. Beyond rational and analytical reasons, there is always a deeper purpose: a compelling cause, strong beliefs and cultural elements to deal with. Being genuinely interested in the person and asking the right questions helps uncover what this purpose is and how to connect it to the tools being learned and the improvement work being performed.

While I do not have hard facts such as those that can be drawn from a logistic regression or a well-designed experiment with conclusive evidence, my experience of over two decades coaching and mentoring others tells me that focusing on helping the individuals, as much as or even more than improving the process itself, leads to better and more sustainable business and organizational results.

What is your experience coaching others?


FCPC Trade Talk Breakfast

GPS was pleased to sponsor what is likely the largest-attended Trade Breakfast of the year, featuring Lee Tappenden, Chief Merchandising Officer for Walmart Canada.

Lee talked about the challenges of keeping the shelves full for 11,005 Walmart stores worldwide across 27 countries, with Canada having 380 stores employing over 95,000 associates. Walmart has one common purpose that translates across the world: saving people money so they can live better! They accomplish this by selling products at unbeatable prices.
To do all this they face the same challenges as most companies; however, their passion for price translates into their passion for efficiencies. To satisfy their mission they must operate for less, buy for less, and grow sales every day. The Trade Breakfast audience was made up mostly of sales reps from Walmart suppliers, who know all too well Walmart’s passion for price reduction.

If you’re facing the same pressures from your customers, don’t just lower your margins; dig deep with an end-to-end supply chain assessment and quantify your potential. We’ve successfully developed and executed assessments within consumer products. If you want some ideas, let’s talk.


How To Avoid The Evils Within Customer Satisfaction Surveys

Guest Post by Rob Brogle, Global Productivity Solutions. Originally posted on iSixSigma October 24, 2013

I.  Introduction

Ever since the Ritz-Carlton Hotel Company won the Malcolm Baldrige National Quality Award for the second time in 1999, companies across many different industries began trying to follow its lead in achieving the same level of outstanding customer satisfaction.  This was a good thing, of course, as CEOs and executives began incorporating customer satisfaction into their company goals and communicating frequently to their managers and employees about the importance of making customers happy.

When Six Sigma and other metrics-based systems began to spread through these companies, it became apparent that customer satisfaction needed to be measured using the same type of data-driven rigor that other performance metrics (processing time, defect levels, financials, etc.) were utilizing.  After all, if customer satisfaction was to be put at the forefront of a company’s improvement efforts, then a sound means of measuring this quality would be required.

Enter the customer satisfaction survey.  What better way to measure customer satisfaction than asking the customers themselves?  Companies immediately jumped on the survey bandwagon—using mail surveys, automated phone surveys, e-mail, web-based, and many other platforms.  Point systems were used (ratings on a 1-10 scale, 1-5 scale, etc.) that produced numerical data and allowed for a host of quantitative analyses.  Use of “Net Promoter Score” (NPS) to gauge customer loyalty became a goldmine for consultants selling these NPS services.  Customer satisfaction could be broken down by business unit, department, and individual employee.  Satisfaction levels could be monitored over time to determine upward or downward trends; mathematical comparisons could be made between customer segments, product or service types.  This was a CEO’s dream—and it seemed there was no limit to the customer-produced information that could help transform a company into the “Ritz-Carlton” of its industry.

In reality, there was no limit to the misunderstanding, abuse, wrong interpretations, wasted resources, poor management, and employee dissatisfaction that would result from these surveys.  Although there were some companies that were savvy enough to understand and properly interpret their survey results, the majority of companies did not.  And this remains the case today.

What could possibly go wrong with the use of customer satisfaction surveys?  After all, surveys are pretty straightforward tools that have likely been used since the times of the Egyptians (Pharaoh satisfaction levels with pyramid quality, etc.).  The reality is that survey data has a lot of potential issues and limitations that make it different from other “hard” data that companies utilize.  It is critical to recognize these issues when interpreting survey results—otherwise what seemed like a great source of information can cause a company to inadvertently do many bad things.  Understanding and avoiding these pitfalls will be the focus of this commentary.

II.  Survey Biases and Limitations

Customer satisfaction surveys are everywhere; in fact, we tend to be bombarded with e-mail or online survey offers from companies who want to know our opinions about their products, services, etc.  In the web-based world of today, results from these electronic surveys can be immediately stored in databases and analyzed in a thousand different ways.  However, in virtually all cases the results are fraught with limitations and flaws.  We will now discuss some of the most common survey problems, which include various types of biases, variations in customer interpretations of scales, and lack of statistical significance.  These are the issues that must be taken into account if sound conclusions are to be drawn from survey results.

A.   Non-response Bias

Have you ever called up your credit card company or bank and been asked to stay on the line after your call is complete in order to take a customer satisfaction survey?  How many times do you actually stay on the line to take that survey?  If you’re like the vast majority of people, you hang up as soon as the call is complete and get on with your life.  But what if the service that you got on that phone call was terrible, the agent was rude, and you were very frustrated and angry at the end of the call?  Then would you stay on the line for the survey?  Chances are certainly higher that you would.  And that is a perfect example of non-response bias at work.

Although surveys are typically offered to a random sample of customers, the recipient’s decision whether or not to respond to the survey is not random.  Once a survey response rate dips below 80% or so, the inherent non-response bias will begin to affect the results.  The lower the response rate, the greater the non-response bias.  The reason for this is fairly obvious:  the group of people who choose to answer a survey is not necessarily representative of the customer population as a whole.  The survey responders are more motivated to take the time to answer the survey than the non-responders; therefore, this group tends to contain a higher proportion of people who have had either very good, or more often, very bad experiences.  Changes in response rates will have a significant effect on the survey results.  Typically, lower response rates will produce more negative results, even if there is no actual change in the satisfaction level of the population.
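To see the mechanics, here is a toy simulation (my own illustration, not from the original article; the response probabilities are invented) in which true satisfaction is fixed at 80% but unhappy customers respond more often:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
satisfied = rng.random(n) < 0.80           # true population satisfaction: 80%

def observed(p_respond_happy, p_respond_unhappy):
    """Measured satisfaction among responders, plus the response rate."""
    responds = np.where(satisfied,
                        rng.random(n) < p_respond_happy,
                        rng.random(n) < p_respond_unhappy)
    return satisfied[responds].mean(), responds.mean()

for ph, pu in [(0.80, 0.80), (0.30, 0.60), (0.05, 0.30)]:
    score, rate = observed(ph, pu)
    print(f"response rate {rate:6.1%} -> measured satisfaction {score:6.1%}")
```

In the lowest response-rate scenario the measured satisfaction comes out around 40%, even though the true level never moves from 80%.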

 B.   Survey Methodology Bias

 The manner in which a customer satisfaction survey is administered can also have a strong impact on the results.  Surveys that are administered in person or by phone tend to result in higher scores than identical surveys distributed by e-mail, snail mail, or on the web.  This is due to people’s natural social tendency to be more positive when there is another person directly receiving feedback (even if the recipient is an independent surveyor).  Most of us don’t like to give people direct criticism, so we tend to go easy on them (or the company they represent) when speaking with them in person or by phone.  E-mail or mail surveys have no direct human recipient and therefore the survey taker often feels more freedom to give negative feedback—they’re much more likely to let the criticisms fly.

Also, the manner in which a question is asked can have a significant impact on the results.  Small changes in wording can affect the apparent tone of a question, which in turn can impact the responses and the overall results.  For example, asking “How successful were we at fulfilling your service needs?” may produce a different result than “How would you rate our service?”  Even the process by which a survey is presented to the recipient can alter the results—surveys that are offered as a means of improving products or services to the customer by a “caring” company will yield different outcomes than surveys administered solely as data collection exercises or surveys given out with no explanation at all.

C.   Regional Biases

Another well-known source of bias that exists within many survey results is regional bias.  People from different geographical regions, states, countries, urban vs. suburban or rural locations, etc. tend to show systematic differences in their interpretations of point scales and their tendencies to give higher or lower scores.  Corporations that have business units across diverse locations have historically misinterpreted their survey results this way.  They will assume that a lower score from one business unit indicates lesser performance, when in fact that score may simply reflect a regional bias compared to the locations of other business units.

D.   Variation in Customer Interpretation and Repeatability of the Rating Scale

Imagine that your job is to measure the length of each identical widget that your company produces to make sure that the quality and consistency of your product is satisfactory.  But instead of having a single calibrated ruler with which to make all measurements, you must make each measurement with a different ruler.  No problem if all the rulers are identical, but now you notice that each ruler has its own calibration.  What measures as one inch for one ruler measures 1¼ inches for another ruler, ¾ of an inch for a third ruler, etc.  How well could you evaluate the consistency of the widget lengths with that kind of measurement system if you need to determine lengths to the nearest 1/16 of an inch?  Welcome to the world of customer satisfaction surveys.

Unlike the scale of a ruler or other instrument, which remains constant for all measurements (assuming its calibration remains intact), the interpretation of a survey rating scale varies for each responder.  In other words, each person who completes the survey has his or her own personal “calibration” for the scale.  Some people tend to be more positive in their assessments; other people are inherently more negative.  On a scale of 1-10, the same level of satisfaction might elicit a 10 from one person but only a 7 or 8 from another.

In addition, most surveys exhibit poor repeatability.  When survey recipients are given the exact same survey questions multiple times, there are often differences in their responses.  Surveys rarely pass a basic gauge R&R (repeatability and reproducibility) assessment.  Because of these factors, surveys should be considered very noisy (and biased) measurement systems, and therefore their results cannot be interpreted with the same precision and discernment as data that is produced by a physical measurement gauge.
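A rough sketch of this personal-calibration and repeatability effect (the offset and noise magnitudes below are invented for illustration): the same people, with the same underlying experience, rarely give identical answers twice.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
true_level = 7.5                            # identical experience for every respondent
calibration = rng.normal(0, 1.0, n)         # each person's personal scale offset

def take_survey():
    noise = rng.normal(0, 0.8, n)           # repeat-measurement noise
    return np.clip(np.round(true_level + calibration + noise), 1, 10)

first, second = take_survey(), take_survey()
print("mean scores:", first.mean(), second.mean())
print("share answering identically both times:", (first == second).mean())
```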

E.   Statistical Significance

 Surveys are, by their very nature, a statistical undertaking and therefore it is essential to take the statistical sampling error into account when interpreting survey data.  As we know from our Six Sigma backgrounds, sample size is part of the calculation for this sampling error:  if a survey result shows a 50% satisfaction rating, does that represent 2 positive responses out of 4 surveys or 500 positives out of 1000 surveys?  Clearly our margin of error will be much different for those two cases.
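To put rough numbers on it: at a 50% rating, the 95% margin of error (about 1.96·√(p(1−p)/n)) is roughly ±49% with 4 responses, but only about ±3% with 1,000.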

There are undoubtedly thousands of case studies of companies that completely fail to take margin of error into account when interpreting survey results.  A well-known financial institution would routinely punish or reward its call center personnel based on monthly survey results—a 2% drop in customer satisfaction would prompt calls from execs to their managers demanding to know why the performance level of their call center was decreasing.  Never mind that the results were calculated from 40 survey responses with a corresponding margin of error of ±13%, making the 2% drop completely statistically meaningless.

Another well-known optical company set up quarterly employee performance bonuses based on individual customer satisfaction scores.  By achieving an average score between 4.5 and 4.6 (on a 1-5 scale), an employee would get a minimum bonus; between 4.6 and 4.7, an additional bonus; and above 4.7, the maximum possible bonus.  As it turned out, each employee’s score was calculated from the average of fewer than 15 surveys—the margin of error for those average scores was ±0.5.  Therefore, all of the employees had average scores within this margin of error, and thus there was no distinguishability at all between any of the employees.  Differences of 0.1 points were purely statistical “noise” with no basis in actual performance levels.

Essentially, when companies fail to take margin of error into account, they wind up making decisions, rewarding or punishing people, taking actions, etc. based purely on random chance.  And as our friend W. Edwards Deming told us 50 years ago, one of the fastest ways to completely de-motivate people and create an intolerable work environment is to evaluate people based on things that are out of their control.

III.  Proper Use of Surveys

 So what can be done?  Is there a way to extract useful information about surveys without misusing them?  Or should we abandon the idea of using customer satisfaction surveys as a means of measuring our performance?

Certainly, it is better not to use surveys at all than to misuse and misinterpret them.  The harm that can be done when biases and margin of error are not understood outweighs the “benefit” of having misleading information.  However, if the information from surveys can be properly understood and interpreted within its limitations, then surveys can, in fact, help to guide companies in making their customers happy.  Here are some ways that can be accomplished.

A.   Use Surveys to Determine the Drivers of Customer Satisfaction, Then Measure Those Drivers Instead

 Customers generally aren’t pleased or displeased with companies by chance—there are key drivers that influence their level of satisfaction.  Use surveys to determine what those key drivers are and then put performance metrics on those drivers, not on the survey results themselves.  Ask customers for the reasons why they are satisfied or dissatisfied, then affinitize those responses and put them on a Pareto chart.  This information will be much more valuable than a satisfaction score, as it will identify root causes of customer happiness or unhappiness on which you can then develop measurements and metrics.
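As a minimal sketch of that affinitize-and-Pareto step (the comment categories and counts below are hypothetical), a few lines of pandas produce the ranking:

```python
import pandas as pd

# Hypothetical affinitized reasons pulled from open-ended survey comments
reasons = pd.Series(["slow response", "price", "slow response", "rude agent",
                     "slow response", "price", "billing error", "slow response"])
counts = reasons.value_counts()                          # Pareto order (descending)
cum_pct = (counts.cumsum() / counts.sum() * 100).round(1)
print(pd.DataFrame({"count": counts, "cumulative %": cum_pct}))
```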

For example, if you can establish that responsiveness is a key driver in customer satisfaction then start measuring the time between when a customer contacts your company and when your company gives a response.  That is a “hard” measurement—much more reliable than a satisfaction score.  The more that a company focuses on improving the metrics that are important to the customer (the customer CTQs), the more that company will improve real customer satisfaction (which is not always reflected in biased and small-sample survey results).

B.   Improve Your Response Rate

If you want your survey results to reflect the general customer population (and not a biased subset of customers), then you must have a high response rate to minimize the non-response bias.  Again, the goal should be a response rate of at least 80%.  One way to achieve this is to send out fewer surveys but send them to a targeted group that you’ve reached out to ahead of time.  Incentives for completing the survey, along with reminder follow-ups, can help increase the response rate significantly.

Also, making the surveys short, fast, and painless to complete can go a long way in improving response rates.  As tempting as it may be to ask numerous and detailed questions to squeeze every ounce of information possible out of the customer, you are likely to have a lot of survey abandonment in those cases once people realize it’s going to take them more than a couple of minutes to complete.  You are much better off giving a concise survey that is very quick and easy for the customers to finish.  Ask a few key questions and let your customers get on with their lives—they will reward you with a higher response rate.

C.    Don’t Try To Make Comparisons Where There Are Biases Present

A lot of companies use customer survey results to try to score and compare their employees, business units, departments, etc.  These types of comparisons must be taken with a large block of salt, as there are too many potential biases that can produce erroneous results.  Do not try to compare across geographic regions (especially across different countries for international companies), as the geographic bias could cause you to draw the wrong conclusions.  If you have a national or international company and wish to sample across your entire customer base, be sure to use stratified random sampling so that your customers are sampled in the same geographic proportion that is representative of your general customer population.
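Here is one way such proportional stratified sampling might look in practice: a sketch with an invented customer table, using pandas’ grouped sampling.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical customer base: 10,000 customers spread 50/30/20 across regions
customers = pd.DataFrame({
    "id": np.arange(10_000),
    "region": rng.choice(["East", "West", "Central"], size=10_000, p=[0.5, 0.3, 0.2]),
})
# Drawing the same fraction within each region keeps the sample's
# geographic mix equal to the population's
sample = customers.groupby("region").sample(frac=0.05, random_state=0)
print(sample["region"].value_counts(normalize=True))
```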

Also, do not compare results from surveys that were administered differently (phone vs. mail, e-mail, etc.) even if the survey questions were identical.  The survey methodology can have a significant influence on the results.  Be sure that the surveys are all identical, and are administered to the customers using the exact same process.

And, finally, always keep in mind that surveys rarely are capable of passing a basic gauge R&R study.  They represent a measurement system that is extremely noisy and flawed, and therefore using survey results to make fine discernments is usually not possible.

D.    Always, Always, Always Account for Statistical Significance in Survey Results

This is the root of the majority of survey abuse—where management makes decisions based on random chance rather than on significant results.  In these situations our Six Sigma tools can be a huge asset, as it’s critical to educate management on the importance of proper statistical interpretation of survey results (as with any type of data).

Set the strict rule that no survey result can be presented without including the corresponding margin of error (i.e., the 95% confidence intervals).  For survey results based on average scores, the margin of error will be roughly ±1.96·s/√n, where s is the standard deviation of the scores and n is the sample size (for sample sizes < 30, the more precise t-distribution formula should be used instead).  If the survey results are based on percentages rather than average scores, then the margin of error can be approximated as ±1.96·√(p(1−p)/n), where p is the resulting overall proportion and again n is the sample size (note that the Clopper-Pearson exact formula should be used if np < 5 or n(1−p) < 5).  Mandating that margin of error be included with all survey results will help frame the results for management, and will go a long way in getting people to understand the distinction between significant differences and random sampling variation.
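A sketch of those two calculations in code (the example inputs are assumptions chosen to roughly reproduce the figures quoted earlier, not taken from the original cases):

```python
import math
from scipy import stats

def moe_mean(s, n, confidence=0.95):
    """Margin of error for an average score (t-based for small samples)."""
    alpha = 1 - confidence
    crit = stats.t.ppf(1 - alpha / 2, df=n - 1) if n < 30 else stats.norm.ppf(1 - alpha / 2)
    return crit * s / math.sqrt(n)

def moe_proportion(p, n, confidence=0.95):
    """Margin of error for a percentage result (normal approximation;
    use Clopper-Pearson instead when n*p < 5 or n*(1-p) < 5)."""
    z = stats.norm.ppf(1 - (1 - confidence) / 2)
    return z * math.sqrt(p * (1 - p) / n)

# Assumed inputs: 40 surveys at ~75% satisfied; 15 surveys with s ~ 0.9
print(f"call center:   +/- {moe_proportion(0.75, 40):.1%}")  # roughly +/-13%
print(f"optical bonus: +/- {moe_mean(0.9, 15):.2f}")          # roughly +/-0.5
```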

Also, be sure to use proper hypothesis testing when making survey result comparisons between groups.  All our favorite tools should be utilized:  for comparing average or median scores we have the t-tests, ANOVA, or Mood’s median test (among others); for results based on percentages or counts we have our proportions tests or chi-squared analysis.
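A minimal illustration with made-up scores and counts: a two-sample t-test for average scores and a chi-squared test for percentage results.

```python
import numpy as np
from scipy import stats

# Hypothetical scores from two business units
unit_a = np.array([8, 9, 7, 10, 8, 9, 6, 9, 8, 7])
unit_b = np.array([7, 8, 6, 9, 7, 8, 5, 8, 7, 6])
t_stat, p_val = stats.ttest_ind(unit_a, unit_b)
print(f"t-test p-value: {p_val:.3f}")

# Hypothetical satisfied/dissatisfied counts for the same two units
table = np.array([[180, 60],    # unit A
                  [150, 70]])   # unit B
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
print(f"chi-squared p-value: {p_chi:.3f}")
```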

If we are comparing a large number of groups or are looking for trends that may be occurring over time, the data should be placed on the appropriate control chart.  Average scores should be displayed on an X̄ and R or X̄ and S chart, while scores based on percentages should be shown on a P chart.  For surveys with large sample sizes, an I and MR chart may be more appropriate (à la Donald Wheeler) to account for variations in the survey process that are not purely statistical (such as biases changing from sample to sample, which is very common). Control charts will go a long way in preventing management overreaction to differences or changes that are statistically insignificant.
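For the P chart case, the control limits follow directly from the overall proportion and each period’s sample size; here is a short sketch with invented monthly counts:

```python
import numpy as np

# Invented monthly survey results: satisfied counts and sample sizes
satisfied = np.array([152, 160, 148, 155, 171, 118])
n = np.array([200, 210, 195, 205, 220, 190])
p = satisfied / n

p_bar = satisfied.sum() / n.sum()             # center line
sigma = np.sqrt(p_bar * (1 - p_bar) / n)      # limits widen as n shrinks
ucl = p_bar + 3 * sigma
lcl = np.clip(p_bar - 3 * sigma, 0, 1)
print("out of control:", (p > ucl) | (p < lcl))
```

With these numbers only the final month falls outside the limits, so only it would warrant a reaction.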

And finally, make sure that if there are goals or targets being set based on customer satisfaction scores, those target levels must be statistically distinguishable based on margin of error.  Otherwise, people get rewarded or punished based purely on chance.  In general, it is always better to set goals based on the drivers of customer satisfaction (“hard” metrics) rather than on satisfaction scores themselves, but in any case the goals must be set to be statistically significantly different from the current level of performance.

IV.  Conclusion

 Customer satisfaction surveys are bad, evil things.  Okay, that’s not necessarily true but surveys do have a number of pitfalls that can lead to bad decisions, wasted resources, and unnecessary angst at a company.  The key is to understand survey limitations and to not treat survey data as if it were precise numerical information coming from a sound, calibrated measurement device.  The best application of customer surveys is to use them to obtain the drivers of customer happiness or unhappiness, then create the corresponding metrics and track those drivers instead of survey scores.  Create simple surveys and strive for high response rates to assure that the customer population is being represented appropriately.  Do not use surveys to make comparisons where potential biases may lie, and be sure to include margin of error and proper statistical tools in any analysis of results.

Used properly, customer satisfaction surveys can be valuable tools in helping companies understand their strengths and weaknesses, and in helping to point out areas of emphasis and focus in order to make customers happier.  Use them improperly, and many bad things happen.  Make sure your company follows the right path.


Reflections on FCPC CEO Executive Conference 2013

Guest Post by Aimée Cowher

Notes on Commander Hadfield

  • In September 2010, Hadfield was assigned to Expedition 34/35. On March 13, 2013 he became the first Canadian to command a spaceship as Commander of the ISS during the second portion of his five-month stay in space.
  • On May 13, Hadfield landed in Kazakhstan after traveling almost 99.8 million kilometers while completing 2,336 orbits of Earth.
  • In June 2013, Chris Hadfield announced that he would retire from the CSA as of July 3, 2013 to take up new challenges.
  • Follow Chris on Twitter! @Cmdr_Hadfield

If you haven’t had the opportunity to meet an astronaut, or even hear one speak, I highly recommend you add it to your bucket list.  It puts everything in perspective.  I recently had the honor to meet Commander Chris Hadfield at the FCPC CEO Executive Conference in Minett, Ontario, Canada.  The information and anecdotes that Commander Hadfield shared during his presentation make you realize that the pressures, challenges and risks that we face in our businesses pale in comparison to being launched in a space shuttle, commanding the International Space Station and especially surviving the tumultuous return to Earth.

One of the most astounding insights for me was just how much knowledge an astronaut must have beyond the subject matter that first comes to mind.  They have to know how to pull a tooth and conduct certain surgical procedures.  Astronauts must be skilled in scientific experimentation, as the primary purpose of space exploration is to further our understanding of matter.  Do you realize that we know only 5% of the matter that makes up the universe?  That’s the percentage in the tail(s) of a 95% confidence interval!

Commander Hadfield described how they approach risk mitigation during their preparation for a mission as spending time “visualizing disaster.”  One example he gave is how they would react if one of the team lost a loved one during their time (~7 months or more) on the space station.  They each documented their desires and even acted out the scenario.  What effort do we make in our businesses to mitigate risk?  Yet there are standard tools available to us (FMEA, for example) that require mostly an investment of time from the people who know our products and processes.
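For readers unfamiliar with the tool, an FMEA ranks failure modes by a risk priority number, RPN = severity × occurrence × detection; a toy sketch with invented failure modes:

```python
# Toy FMEA sketch (hypothetical failure modes and scores, 1-10 each):
# rank risks by RPN = severity x occurrence x detection
failure_modes = [
    ("key supplier line down", 8, 3, 4),
    ("label misprint",         5, 4, 2),
    ("cold-chain break",       9, 2, 6),
]
for name, s, o, d in sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True):
    print(f"{name:22s} RPN = {s * o * d}")
```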

Notes on the International Space Station (ISS)

  • Along with the United States, Russia, Europe and Japan, Canada is a partner in the International Space Station (ISS), an orbiting research laboratory.
  • Since the first module of the Station was launched in 1998, the Station has circled the globe 16 times per day at 28,000 km/h at an altitude of about 370 km, covering a distance equivalent to the Moon and back daily.
  • The Station is about as long as a Canadian football field, and has as much living space as a five-bedroom house.
  • Canada’s contribution to the ISS is the Mobile Servicing System (MSS)—a sophisticated robotics suite that assembled the Station in space, module by module.

Beyond the scientific and management brilliance that Commander Hadfield obviously possesses, it is incredible that he considered the social impact he could have by exploiting social networking, with help from his son, to bring the whole “being in space” experience to the masses.

He showed a graph of the visits to the NASA website that looked something like this…

[Graph: visits to the NASA website]

I recall from last year’s conference hearing advice from the President of Google Canada that you shouldn’t trust your web/social media presence to a 20-something hotshot who simply understands the technology.  This just reinforces that message for me!  You can utilize technology to its greatest capability, but if you aren’t sharing content that captivates the audience, it is all for naught.  You need to engage people in the organization who are passionate about your products/services, your contribution to the community and society, and the value you are providing to all stakeholders.  An astronaut and his son produced the breakthrough change above!  No doubt both had the passion first and then understood or learned the technology.

None of this is meant to discount the significant contributions that other presenters – like Ben & Jerry’s, McDonald’s, and Sobeys – made at the conference.  Following are my take-aways from a few of the speakers’ presentations…

What Ben & Jerry’s has accomplished, from a ‘scoop shop’ in Vermont to having stores in 35 countries, while preserving its commitment to being a values-led business, is commendable.  The company has three different mission statements – product, economic and social – that must always work in harmony, with strategic and tactical plans that support each mission.

Most intriguing is the social impact that is core to their existence and affects their business decisions throughout the supply chain.  For example, their single source for brownies, a key component of several of their flavors, hires people who were previously incarcerated and whom most would consider unemployable.  Other examples include their commitment to non-GMO ingredients, Fair Trade sourcing and paying a ‘livable wage’ vs. minimum wage ($16 vs. $8.50 per hour).

We heard from John Betts, CEO of McDonald’s Canada, about their efforts and success in rebranding an icon. Mr. Betts shared with us three pillars that guided their journey:

  1. Always listen to the customer
  2. Collaborate internally and be bold
  3. Commit to beating yesterday

The third pillar struck a chord with me.  It was refreshing to hear the commitment to continuous improvement expressed in such a practical way that everyone in the organization can act upon it.  At GPS, it’s embedded in our purpose statement – “…to engage people to (help) make your company better tomorrow than today.”  So often, we have made continuous improvement an initiative or a program, or created a special organization around it and tasked it with delivering results.  Our best, most successful engagements and clients are those where we have helped make continuous improvement part of the culture.  It starts with a commitment to “beating yesterday” that is embraced and persistently pursued by all.

There were other excellent presentations that made this conference the most content-rich I’ve attended in a while.  Kudos to the FCPC committee that put together the line-up of speakers, and many thanks to those speakers for sharing their insight and experience.  And if you ever have a chance to attend a conference where an astronaut is presenting, I highly recommend going.  And in the words of Commander Hadfield, “if you ever get the chance to take a flight to space, I highly recommend it!”

 


 

FCPC CEO Executive Conference 2013 Recap

Aimée and I were excited to be the Gold Sponsor of the FCPC CEO Conference.  We shared an opportunity to meet Commander Chris Hadfield and hear him speak on his journey to space and spending five months on the International Space Station.  I thought his photos from space were breathtaking, and I really appreciated his insights on teamwork and preparing for disaster.  Aimée and I had a chance to chat with him, and for an astronaut he is really down to earth 😉

Ken Bechard
Managing Partner
+1 (519) 355 8990


Aimée Cowher, GPS Founding Partner and CEO, and Ken Bechard, Managing Partner, GPS Canada, pose with Commander Chris Hadfield (@Cmdr_Hadfield) at the recent FCPC CEO Executive Conference.


GPS Signs On to Be the Event Sponsor of the FCPC Trade Talks with Walmart Canada

Join us Thursday October 24, 2013 at the Sheraton Toronto Airport Hotel and Conference Center for an executive lunch sponsored by Global Productivity Solutions.  We are pleased to work with Food & Consumer Products of Canada to host Lee Tappenden, Chief Merchandising Officer of Walmart Canada and look forward to hearing what he has to say.

Register on the FCPC site:  Register Now


 

 

Global Productivity Solutions and MoreSteam.com Partner to Provide Top Three Computer Manufacturer with Single-source Global Six Sigma Training

Clinton Township, Michigan – May 1, 2013 – Global Productivity Solutions, an industry-leading operations consulting firm, and MoreSteam.com, the leading global provider of online Lean Six Sigma training, have partnered to become the single-source global provider of Six Sigma training for a top three computer manufacturer.

This deal marks a significant first step in the new partnership between Global Productivity Solutions and MoreSteam. The complementary strengths of the two companies – MoreSteam’s proven e-learning capability and technologies and Global Productivity Solutions’ track record of delivering Operational Excellence® – played a critical role in their joint selection. As implementation begins, Global Productivity Solutions will provide the onsite training and support in conjunction with MoreSteam’s e-Learning, classroom simulations, and project tracking capabilities.

Another key factor in the computer manufacturer’s decision to select MoreSteam and GPS was the international reach of both firms. MoreSteam’s robust multiple-language options allow the client to remove the language barrier so that employees experience a consistent, high-quality and practice-based process improvement curriculum. With its offices in seven countries, Global Productivity Solutions has a proven history of providing native-language services in virtually every corner of the globe.

Aimée Cowher, GPS Founding Partner and CEO, believes that the collaboration was a key factor in the selection of GPS and MoreSteam from a very diverse field of competitors. “Our partnership with MoreSteam strengthens our capability to better serve our customers, allowing us to focus our resources on what we do best by incorporating what MoreSteam does best. Our partnership offers a powerful combination of training and implementation that drives organizational capability and delivers rapid, sustainable results,” said Cowher.

About Global Productivity Solutions
Global Productivity Solutions is an industry-leading operations consulting firm helping companies achieve Operational Excellence® across the supply chain, from innovation to invoicing. Engaging all levels of the organization, from the C-suite to the shop floor, we challenge what’s possible to make assets more reliable, systems more efficient and processes more capable. Our growing team of 100+ highly experienced consultants can support you around the globe to help you achieve higher operating income, improve working capital and enable profitable growth.

About MoreSteam.com®
MoreSteam.com is the leading global provider of online Lean Six Sigma training and Blended Learning technology, serving over 2,000 corporate clients and over 50% of the Fortune 500 with a full suite of Lean Six Sigma e-Learning courses, data analysis software, discrete event simulation software, online project tracking software, online testing tools, and project simulations and games. MoreSteam.com was launched in the year 2000 in response to the high cost of traditional Six Sigma training and tools, and has now trained over 400,000 Lean Six Sigma professionals. MoreSteam’s mission is to enable people to advance the performance of their organizations by delivering powerful tools for process improvement to the widest possible audience at the lowest price available. For more information about MoreSteam.com, visit: http://www.moresteam.com/.
