"We have already had one investor for $25K, and another who is very involved in the food business, who could be a funder on a much larger level. So we are very pleased, and offer our thanks."
Posted on May 31, 2016 @ 07:58:00 AM by Paul Meagher
Isaac Newton was one of the greatest physicists ever. He also spent a lot of time engaged in alchemy, the attempt to turn a "base metal" into a "noble metal", particularly gold. He never had much success
in this venture, and one wonders how he reconciled his hard-nosed physics and mathematics with his alchemical pursuits.
Lately I've been engaged in a form of alchemy that involves converting old, rotted fence posts into functional and aesthetically pleasing hugelbed content and structure.
Hugelkultur is a type of raised bed popularized by the premier Austrian permaculturist
Sepp Holzer. It generally consists of digging a trench and laying down large woody material, followed by smaller woody material, followed by non-woody material (e.g., hay in various states of decay), and finally the excavated soil mixed and topped with some compost. There are different ways to construct a hugelkultur bed, and the version we implemented was derived from Jenni Blackmore's book Permaculture: Abundant Living On Less Than An Acre (2015). I highly recommend this book for its self-deprecating humor, its writing quality, and the author's knack for finding simple, easy designs for permaculture structures like hugelbeds, spiral gardens, etc.
This is what the hugelbed looked like as we began building it. Notice the rotted logs used both as side supports and as humic content in a trench just below the sides of the hugelbed.
One way to look at certain forms of entrepreneurship is as a form of alchemy, a transmutation of base elements into noble elements. In modern lingo we might call this "adding value". Isaac Newton may have been pursuing an entrepreneurial quest when he tried to convert base metals into gold, and his failure is perhaps a caution that even the greatest minds can fail to add value. On the other hand, we can be quite successful at times, and the three hugelbeds we constructed from the old fences we tore down converted something I would have considered waste into something that will help grow veggies, herbs and strawberries well into the future. A common slogan in permaculture is that "the problem is the solution", and in this case the problem of old rotted wood was the solution to finding content and structure for my hugelbeds. Adding value may not simply be about making something good even better; it can also be about turning a problem into the solution to a different problem. There is a lot of alchemical thinking involved.
Posted on May 27, 2016 @ 12:46:00 PM by Paul Meagher
In a recent blog I promoted the book Grit as a timely book to consider reading (it is currently a best-seller). Already there is some controversy over the power of grit to explain success. This article also discusses criticism of Malcolm Gladwell's promotion of the "10,000 hours of practice" rule for achieving expertise as being too simplistic.
Grit and the "10,000 hours of practice" rule are related ideas that are worth considering together.
Why do you need grit? Why does acquiring expert-level performance require 10,000 hours? There are many answers to these questions, but one that appeals to me lately, as a result of reading Kenneth Hammond's books, is the idea that learning under conditions of uncertainty is very difficult and requires a lot of time and grit to achieve mastery. Our understanding of the world is mediated by multiple fallible indicators, and knowing which indicators to attend to and how much weight to assign each one is something that might take a lifetime to master. If we perform action A (plant potatoes in hay) on one occasion and achieve great results (lots of potatoes), and then do it the next year and get very poor results, then we have to start modifying and complicating our understanding of hay as a growing medium for potatoes.
One way to frame why grit and 10,000 hours of practice are used to explain high-level performance is that we are awash in multiple fallible indicators that may simply require a lot of persistence and practice to make sense of. If this is true, then perhaps we might not need so much grit and so many hours to achieve mastery if we know that our task is to make sense of multiple fallible indicators as they relate to, say, selling a product or investing in a stock. We need to put ourselves in a situation and frame of mind that allows us to identify the indicators and assign the appropriate weights in our judgment and decision making.
If grit is so important to achieving success, the question arises: why is it so important? One answer that I like is that achieving success requires mastery of the multiple fallible indicators associated with the domain we work within. There is no magic bullet for learning under conditions of uncertainty. Success perhaps goes to those who don't get frustrated by this slow learning process. Maybe we can get by with a little less grit and a little less practice if we explicitly acknowledge and use Kenneth Hammond's multiple fallible indicators strategy (a.k.a. the "correspondence strategy") for dealing with uncertainty.
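To make the slow-learning point concrete, here is a toy simulation (my own sketch, not Hammond's) of how long it takes to estimate the validity of a single fallible indicator from noisy feedback; the true validity and noise level are made-up numbers for illustration.

import numpy as np

rng = np.random.default_rng(0)
true_validity = 0.3  # hypothetical ecological validity of one indicator

# Estimate the indicator's validity from noisy outcomes at various sample sizes
for n_trials in (10, 100, 1000, 10000):
    cue = rng.normal(size=n_trials)
    outcome = true_validity * cue + rng.normal(size=n_trials)
    estimate = np.corrcoef(cue, outcome)[0, 1]
    print(n_trials, round(estimate, 2))

With only a handful of seasons of feedback (the potato situation above), the estimates bounce around wildly; they settle near the true value only after a great many trials, which is one way of seeing why mastery under uncertainty takes grit.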
Posted on May 24, 2016 @ 07:35:00 AM by Paul Meagher
Lately I've been thinking about how to prioritize what to do first. The problem of prioritization arises, for example, in
trying to figure out what renovation would be most beneficial to an old house that we are gradually fixing up. There is a long list of things that could be done to improve the look and functioning of the house, and often when I think one job should have the highest priority, some constraint or opportunity presents itself that reshuffles the list and gives another job higher priority. For example, if the highest-priority job requires a certain skillset and that skillset is not available but another is, then a task that uses the available skillset may become the highest priority.
Sometimes when we prioritize we don't assign a clear time frame for when the priority will be addressed. This can be a weakness in our prioritization process, since putting a time frame on a task might clarify whether the required skillset will be available then and, if not, cause us to reshuffle our priorities for that time frame. Prioritization is, after all, something we must engage in every day, and often on an hour-by-hour basis, so putting time frames on our task list might help us be more realistic about whether tasks can be accomplished within them.
These days prioritization feels more like a process of working within the constraints of labor, weather, skillsets, safety, and what needs to be done. Dealing with the flush of spring growth around the farm is a dance with nature, with how many helpers I have, what their skillsets are, whether they can do the work safely, and what needs to be done. I have a list of tasks I would like to see done in the next week, and some have a high priority, but if the weather is not cooperating then obviously I need to figure out the next best use of everyone's time. That is often not simply a matter of what I want done, but also a negotiation with the helpers to ensure high levels of motivation to work on the jobs.
The idea that prioritization involves coming up with a rank-ordered task list represents only part of the prioritization process. The relationship between the task list order and when those tasks get done is quite dynamic and sensitive to the constraints in effect. The selection of tasks is driven by constraints, with the rank order of tasks being just one of the constraints determining what gets done on a day-to-day basis.
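As a toy sketch of what constraint-driven selection looks like (invented tasks and constraints, in Python):

# Hypothetical task list: rank order is just one constraint among several
tasks = [
    {"name": "repair roof",  "priority": 1, "skills": {"carpentry"}, "needs_dry_weather": True},
    {"name": "mow grass",    "priority": 2, "skills": {"mowing"},    "needs_dry_weather": True},
    {"name": "paint cellar", "priority": 3, "skills": {"painting"},  "needs_dry_weather": False},
]

def next_task(tasks, available_skills, weather_is_dry):
    # Filter out tasks whose constraints can't be satisfied today ...
    doable = [t for t in tasks
              if t["skills"] <= available_skills
              and (weather_is_dry or not t["needs_dry_weather"])]
    # ... then let rank order decide among whatever remains
    return min(doable, key=lambda t: t["priority"], default=None)

# Rainy day, and today's helper can paint and mow but not do carpentry:
print(next_task(tasks, {"painting", "mowing"}, weather_is_dry=False))
# -> the paint cellar task, even though it sits last in the rank order

The point of the sketch is that the filtering step, not the ranking step, often does most of the deciding.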
Today we will be mowing grass to reduce competition between the vines/trees and the grass. I wish I didn't have to prioritize this job, but if I want to achieve my goal of growing grape vines and apple trees, this is part of the price of doing so. If I changed my goals to focus on growing vegetables instead, then grass would not be as much of a concern, and mowing might be something I do only once a year to make hay. So we have priorities, constraints, and goals, and from this brew we decide what to do on a day-to-day basis.
There are suggestions out there on how to prioritize what to do next, and the book Simple Rules offers some useful ones in terms of picking tasks that will "move the needle". Some companies devise a set of simple rules, perhaps 3 or 4 in number, that help guide the prioritization process towards "moving the needle" on the company's growth. It seems to me, however, that the task list is like a business plan that can change substantially once we start to execute it and react to the constraints we find ourselves in. How much weight should we give to the task list, and how much to the active set of constraints we are working within? Is there evidence one way or the other that a company acting according to a rigid, well-defined set of priorities does better than one that is more reactive to opportunities and constraints?
I don't have any clear answers, which is why I'm thinking and blogging about this issue. I do think that prioritization involves working with constraints, and that we might approach prioritization decision making as more typically involving constraint-driven problem solving than task-list planning. The problem of what to do next often feels like it is completely determined by the current constraints (which include the task list as one component). To say that I decided to do this work today because it was high on my task list is often just a nice story we tell ourselves, when the reality is that the task list played a relatively minor role in comparison with the set of constraints that were active or acknowledged on that particular day. Prioritization decision making is more of a quasi-rational process than a fully analytical process of deciding what to do next.
The book has great reviews, and Angela Duckworth is a recipient of the prestigious MacArthur Fellowship, often referred to as the "genius grant". The book discusses the relative importance of talent versus grit in achieving success. Talent, as measured by SAT scores or other means, only correlates with success if accompanied by grit (passion and perseverance), and you can achieve success even if you don't exhibit a lot of talent but have grit. The book advises that you not be so impressed by raw talent, because raw talent may not lead anywhere without grit. The book is a useful examination of what grit is, how it is measured, and what it correlates with.
Angela did a recent talk at Google to promote her book and you might want to watch it if you are unsure whether you want to read the book.
I'm only a couple of chapters into the book so far, so I can't provide a full review here, just a recommendation based on what I've read so far and the buzz accompanying the book launch (e.g., Google usually only invites higher-profile authors to present, and the view counts on her talk are higher than for most Google Talks). Given the relationship between grit and success, I think most entrepreneurs might want to examine this relationship in more depth for motivation and insight into the "science of success".
Posted on May 16, 2016 @ 09:29:00 AM by Paul Meagher
I posted a number of recent blogs on the lens model. When you need to make a judgment about something that involves uncertainty (e.g., whether to invest in a startup), you can construct a lens model to represent the indicators you think are relevant to making that decision (e.g., good team, good business plan, meeting milestones, rate of new customers, potentially profitable, etc.). You can construct a lens model for many different types of judgments.
In today's blog I want to talk about a tool you can use to create a lens model diagram. The tool is a free, open-source piece of software called Graphviz. You can download Graphviz to your computer and generate high-quality graph visualizations, or you can paste your graph recipes into this online tool for generating graphs. Note that the online graphs are lower resolution than what you get if you download the software.
Graphviz is software for visualizing graphs that are specified using the DOT language. The DOT language only includes a few features, so it is quick to get started with. Here is a recipe you can use to get started making your own lens diagrams.
graph {
  rankdir=LR;  // lay the diagram out left-to-right

  // World-to-indicator edges: the ecological validity of each indicator
  world -- indicator1 [label="0.6", penwidth=3];
  world -- indicator2 [label="0.3", penwidth=2];
  world -- indicator3 [label="0.1", penwidth=1];

  // Indicator-to-judge edges: how heavily the judge weighs each indicator
  indicator1 -- judge [label="0.6", penwidth=3];
  indicator2 -- judge [label="0.3", penwidth=2];
  indicator3 -- judge [label="0.1", penwidth=1];

  // Achievement: how well the judgments track the world
  judge -- world [label="Accuracy"];
}
When I paste this graph recipe into the GVEdit program that comes with the Graphviz software, and hit the "Run" button, it generates this generic lens diagram:
I could, for example, modify this graph for a salary estimation problem.
Say I wanted to estimate the salary of a professor in a university setting. I could identify several salary indicators, such as number of publications, professorial rank, teacher rating, and age, and use these indicators to construct an estimate which could then be compared to professors' actual salaries to assess accuracy. Some such formula might actually be used to determine professors' salaries, and I could be more or less calibrated to the factors involved depending on which indicators I used and whether I included and weighted each indicator properly. In the diagram above, the cue utilization validities (the numbers on the lines from indicators to judge) exactly correspond to the ecological validities of the indicators (the numbers on the lines from world to indicators), which is what perfectly calibrated judgment looks like.
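As a sketch, here is what the salary version of the recipe might look like if you generate the DOT with the graphviz Python package (pip install graphviz) instead of writing it by hand; the indicator weights are made up for illustration.

from graphviz import Graph

# Hypothetical ecological validities for the salary indicators
weights = {"publications": 0.5, "rank": 0.3, "teacher_rating": 0.1, "age": 0.1}

g = Graph("salary_lens")
g.attr(rankdir="LR")
for cue, w in weights.items():
    # world-to-indicator and indicator-to-judge lines, thickness scaled by weight
    g.edge("salary", cue, label=str(w), penwidth=str(1 + 4 * w))
    g.edge(cue, "judge", label=str(w), penwidth=str(1 + 4 * w))
g.edge("judge", "salary", label="Accuracy")

print(g.source)  # DOT text you can paste into GVEdit or the online tool

This is just one way to do it; hand-editing the DOT recipe above works equally well.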
Researchers have been using the lens model to study judgment and decision making for over half a century now. Lens models are probably not used as much as they could be as a strategy for dealing with uncertainty. If you want to use them, it helps to have a tool that makes them easy to build, and the DOT language and the Graphviz software let you quickly construct a lens model for whatever judgment or decision problem interests you.
Posted on May 11, 2016 @ 10:25:00 AM by Paul Meagher
I was browsing a recent article by NVIDIA researchers called End to End Learning for Self-Driving Cars and was intrigued
by the simple metric they were using to evaluate the performance of their self-driving algorithm. The metric is called "Autonomy" and is based upon measuring how much time the driver spends intervening to correct driving performance over a given amount of driving time. "Autonomy" is measured with the following simple formula:
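autonomy = (1 - (number of interventions × 6 seconds) / elapsed driving time in seconds) × 100%

(The six-seconds-per-intervention figure is the convention I recall the paper using to cost each intervention in driver time.)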
The inverse of the "Autonomy" score might be called the "Helping" score.
So perhaps self-driving vehicles of the future will be rated by their degree of autonomy, or perhaps they will operate like an advanced form of cruise control where we have to intervene every so often and, as such, we will want to keep track of our vehicle's current autonomy score. Maybe on icy roads the autonomy score goes way down, but driving through the prairies on a nice day it sits at 100%, as it does in this screenshot of the NVIDIA dashboard.
Autonomy in the workplace is measured by researchers in terms of a worker's freedom to schedule and choose their work. Not surprisingly, they have discovered that workers with more autonomy have more job satisfaction. Personally, I would be more interested in measuring worker autonomy the way autonomy is measured in self-driving cars, so that we might have answers to questions about the variance in worker autonomy as a function of various factors (e.g., individual differences, time on the job, type of work, past autonomy scores, etc.). Likewise, we might study child development using an autonomy metric like this. Take a task and ask your kids to do it and, depending on age or other factors, you might find yourself intervening with such frequency and duration that you wonder why you asked them to help in the first place. Other kids might prefer to try it themselves and not ask for much help. Are there individual differences in preferences for working autonomously? Do they correlate with measures of introversion/extroversion?
I think these self-driving car researchers are onto something with this autonomy score. It offers a nice simple formula we can use to reason about an important aspect of worker performance; namely, the amount of intervention that is required to help them carry out a work assignment.
Incidentally, you might wonder why graphics chip manufacturer NVIDIA is doing research on self-driving cars. The popular "deep learning" technique they use for their algorithm relies critically on the parallel processing that Graphics Processing Units (GPUs) provide. The popularity of deep learning is good news for NVIDIA, and it looks like they are not content to just manufacture more hardware for deep learning algorithms; they also want to play a role in designing some of the deep learning algorithms that will run on their chips. Deep learning algorithms harness the power of many GPUs, of which NVIDIA has no shortage.
Posted on May 10, 2016 @ 12:49:00 PM by Paul Meagher
Today I am using a stirrup hoe to remove some weeds from my suburban garden beds so I can get them ready for planting. We are still at risk of frost, so I have to be careful about which seeds I plant out first for a salad mix. Fortunately, the Urban Farmer, Curtis Stone, has some good advice on salad mix varieties to plant out first (e.g., Red Russian Kale, Arugula, Spinach, Mustard Greens, Tatsoi, Beet Greens) and why salad mixes are an important product for his urban farming business.
As I prepare to enter into my vegetable growing season, I'm sure I will be checking out Curtis Stone's YouTube Channel for timely vegetable growing advice and inspiration. Curtis Stone is an innovative and entrepreneurial urban farmer so watching his videos satisfies my desire for high quality veggie-growing and business-growing content.
Posted on May 6, 2016 @ 09:35:00 PM by Paul Meagher
I never really gave much thought to the practical importance of the philosophical distinction between correspondence and coherence theories of truth until I read Kenneth Hammond's book Human Judgment and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice (1996). It turns out that in research on judgment and decision making the distinction is very important because it defines what researchers consider "good" or "correct" judgment and decision making. For someone subscribing to a coherence theory of truth, the truth of a statement is determined by how well it fits with other things we take to be true, such as probability theory.
Nobel Laureate Daniel Kahneman in his best-selling book Thinking, Fast and Slow (2011) discusses a variety of experiments that purport to demonstrate how poorly humans often reason because their reasoning does not accord with the rules of probability theory. The experiments demonstrate many different types of biases (anchoring, framing, availability, recency, etc...) that human reasoning is subject to based on their disagreement with the rules of probability theory.
Professor Kenneth Hammond, and before him, his mentor Egon Brunswik, were not big fans of the coherence theory of truth. They preferred a correspondence theory where the truth of a statement is determined by whether it corresponds to the facts. They believed that our access to the facts was often mediated by multiple fallible indicators.
We may not be able to verbalize some of the indicators we use in our judgment, or how we are combining them, but our intuitive understanding can lead to accurate judgments about the world even if we don't have a fully coherent account of why we believe what we do. Often the judgment rule turns out to be a simple linear model that combines information from multiple fallible indicators. Experiments in this tradition involve people making judgments about states of the world based upon indicator information and examining the accuracy of their judgments, the ecological validity of the indicators, and whether judges utilize the indicator in a way that corresponds to its ecological validity.
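A toy simulation (invented weights, in Python) makes this setup concrete: a world that combines fallible indicators one way, a judge who combines them a slightly different way, and achievement measured as the correlation between the two.

import numpy as np

rng = np.random.default_rng(1)

ecological_validities = np.array([0.6, 0.3, 0.1])  # how the world weighs the cues
judge_weights = np.array([0.5, 0.4, 0.1])          # how the judge weighs them

cues = rng.normal(size=(1000, 3))                  # multiple fallible indicators
world = cues @ ecological_validities + rng.normal(scale=0.5, size=1000)
judgment = cues @ judge_weights

# "Achievement": how well the intuitive judgments track the actual state of the world
print(round(np.corrcoef(judgment, world)[0, 1], 2))

Even with miscalibrated weights and a noisy world, the judge's accuracy can be respectably high, which is part of why simple linear models describe expert judgment so well.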
So how you conduct and interpret experiments in judgment and decision making is affected by whether you believe correspondence theories of truth are superior to coherence theories, or vice versa. They are metatheories that shape the specific theories we come up with and how we study them.
The distinction is relevant to entrepreneurship. For example, a business plan is arguably a document designed to present a coherent account of why the business will succeed. If you've ever questioned the value of a business plan, it could be because it is a document judged on coherence criteria, while the actual success of the startup will depend upon whether the startup's value hypothesis and growth engine hypothesis correspond with reality. Eric Ries, in his best-selling and influential book The Lean Startup (2011), discusses many techniques for validating these two hypotheses. Although he does not discuss the correspondence theory of truth as his metatheory, it is pretty obvious he subscribes to it.
In practice, the correspondence theory of truth often involves defining and measuring indicators and making decisions based on those indicators. In the lean startup, Eric Ries advocates looking for indicators to prove that your value hypothesis is true. If the measured indicators don't prove out your value hypothesis, you may need to start pivoting until you find a value hypothesis that appears correct according to the numbers. If your value hypothesis looks good, then you will need to validate your growth hypothesis by defining and measuring key performance indicators for growth. The lean startup approach is very experiment- and measurement-driven because it is a search for correspondence between the value and growth hypotheses and reality.
This diagram should actually be two lens models, one for the value hypothesis and one for the growth hypothesis; I'm being lazy. The lens model for the value hypothesis asks what indicators we can use to measure whether our product or service delivers the value we claim it does. The lens model for the growth hypothesis asks what indicators we can use to measure whether our growth engine is working. You should read the book if you want examples of how indicators of value and growth were defined, measured, and used in the various startups discussed.
One reason the lean startup theory is useful is that success in starting a business is defined more in terms of correspondence with reality than coherence with other beliefs we hold to be true. There are lots of situations where the coherence theory of truth might be useful, such as narratives about the meaning of life, and social interactions where truth is a matter of perception and plausible storytelling, but that does not get you very far if you are a startup or running a business. Correspondence is king.
If correspondence is king, you might find the lean startup lens model above offers a simple visualization that can remind you of how accurate judgments regarding the value and growth hypotheses for startups are achieved.
Posted on May 4, 2016 @ 09:49:00 AM by Paul Meagher
Two topics that I like to blog about are lens models and decision trees. Today I want to offer up suggestions for how lens models might be constructed from decision trees.
Recall that a lens model looks something like this (taken from this blog):
Recall also that a fully specified decision tree looks something like this (taken from this blog):
Notice that the decision tree includes two factors: how much nitrogen to apply (100k, 160k or 240k per acre) and quality of the growing season (poor, average, good). In the context of a lens model, these might be viewed as indicators of what the yield might be at the end of the growing season. In other words, if the "intangible state" we are trying to judge is the amount of corn we will get at the end of a growing season, then two critical indicators are how much nitrogen is applied and what the quality of the growing season will be like (which in turn might be indicated by the amount of rain). We have control over one of those indicators (how much nitrogen to apply) but not the other (what the weather will be like). The main point I want to make here is that it is relatively easy to convert a decision tree to a lens model by making each factor in your decision tree an indicator in your lens model.
I don't want to get into the technical details of how decision tree algorithms work, but in general they work by recording various "features" associated with a target outcome you are interested in. For example, if you want to make a decision about whether a c-section will be required to deliver a baby, you can look at all the c-section births and all the non-c-section births and record standardized information about all those cases. Then you look for the feature that best discriminates between c-section and non-c-section births. That feature will likely not be a perfect discriminator, so you take the remaining cases and use the next best feature to discriminate between cases that require c-section births and those that don't. If you do this you come up with the decision tree shown below, which can be captured more simply in the if-then rule also shown below:
We can construct a lens model from this tree, or from the if-then rule, where each of the three factors is an indicator in our lens model. If we use the thickness of the line connecting the judge to an indicator to represent the strength of the relationship, the first indicator would have a thicker line than the second, which would be thicker than the third. The first indicator captures the most variance, followed by the second, followed by the third. This is how algorithms that generate decision trees work, so when we construct lens models based on them, we should expect them to have a certain form.
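Here is a sketch of that idea in code (synthetic data and generic indicator names, using scikit-learn): fit a small tree, then read the relative strength of each indicator off the tree's feature importances, which could then set the penwidth values in a lens diagram.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))  # three candidate indicators
# Outcome depends strongly on indicator 1, somewhat on 2, barely on 3
y = (1.5 * X[:, 0] + 0.7 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(size=500)) > 0

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
for name, importance in zip(["indicator1", "indicator2", "indicator3"],
                            tree.feature_importances_):
    print(name, round(importance, 2))  # higher importance -> thicker lens line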
The point of this blog is to show that there are several formal techniques we might use to generate a lens model. Multiple linear regression is one previously discussed technique. Today I discussed the use of decision tree algorithms as another. A decision tree algorithm also suggests a plausible psychological strategy for coming up with indicators; namely, pick an indicator that accounts for most of the target cases. If there are some cases it doesn't handle, pick another indicator that handles more of the remaining cases, and so on. You might not have to use many indicators before you arrive at a set that captures enough of the data to satisfy you.
Multiple linear regression and decision tree algorithms are two formal techniques you can use to make the indicators used in judgment explicit, and they offer concrete approaches to thinking about how common sense, which we often find difficult to explain, might work and be improved upon. Doctors making decisions about c-sections might have relied upon common sense that included consideration of the factors studied, but the formal techniques helped to identify the relevant indicators and the overall strength of the relationship between the indicators and the need for a c-section. Where multiple regression is a more holistic/parallel method of finding indicators, decision tree learning algorithms strike me as a more analytic/sequential method of finding judgment indicators.
Below is a lecture by machine learning guru Tom Mitchell on decision tree learning that is set to start with him discussing the c-section example.
The concepts of risk and uncertainty are central to the current definition of a startup and what it means to be an entrepreneur. Risk and uncertainty, however, are not the same. If you have no basis in prior experience or logic to measure the degree of uncertainty involved in a situation, then you can't assign a level of risk to it. Insurance companies and banks deal with uncertainty in the form of risk whereas startups and entrepreneurs often deal with unquantifiable uncertainty. The distinction between risk and uncertainty gives us a way to think about how entrepreneurial companies might evolve over time from uncertainty-based ventures to risk-based ventures and possibly back to uncertainty-based ventures if they decide to reinvent themselves. Here is a passage from Professor Leyden's article that discusses the relevance of the distinction:
Although the entrepreneurship literature has increasingly come to accept Knight's view that entrepreneurial action takes place under conditions of uncertainty, that view is far from universal. For those who take the view that the entrepreneur lives in a world of (quantifiable) risk, it may be reasonable to think of entrepreneurial opportunities as objective phenomenon waiting to be discovered, albeit with risk. But under an uncertainty-based view, entrepreneurs do not so much discover profit opportunities as create them. As Alvarez and Barney note, such creativity is the result of the entrepreneurs' own organizing efforts in the face of uncertainty. However, because the condition of uncertainty may change over time, the bases for organizing entrepreneurial firms are also likely to change. As a result, entrepreneurial (that is, uncertainty-based) firms over time may be transformed into non-entrepreneurial (that is, risk-based) firms once the probability distribution of outcomes associated with uncertain exchanges is learned through experience. Based on this reasoning, Schumpeter's notion of creative destruction can be thought of as including not just the replacement of older firms by newer firms, but also the transformation of entrepreneurial firms into non-entrepreneurial firms over time. Such transformations, which Schumpeter saw as common, imply a continual need for new (or reinvented) firms that, through their decision to be entrepreneurial, enter willingly into a world of uncertainty and creativity. ~ p 72
The valuation of a startup may also be based upon whether its prospects are completely uncertain or whether it has received enough feedback that it can start to quantify some of its operational uncertainty. For example, when a startup first starts to market its products there may be no clear relationship between its marketing efforts and the number of new customers that effort produces. After a while, however, it might start to see that certain types of marketing produce better results than others. Once it can start to quantify some marketing relationships (even if the relationships still have a lot of variability), it can leverage that risk information with investors to potentially achieve a higher valuation. Investors are generally more comfortable dealing with risk than uncertainty and may be inclined to agree to higher valuations when there is less uncertainty involved.
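A small worked example (hypothetical numbers) of uncertainty turning into quantified risk: once a startup has some campaign data, it can attach an estimate and an error bar to its conversion rate rather than just a shrug.

import math

# Hypothetical campaigns: (prospects reached, new customers won)
campaigns = [(1000, 12), (1500, 21), (800, 7)]

reached = sum(r for r, _ in campaigns)
customers = sum(c for _, c in campaigns)
p = customers / reached  # estimated conversion rate

# Rough 95% interval via the normal approximation
half_width = 1.96 * math.sqrt(p * (1 - p) / reached)
print(f"conversion rate ~ {p:.3f} +/- {half_width:.3f}")

An investor can price the interval; they can't price a blank.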