<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Principles Archives : Predictive Modeler</title>
	<atom:link href="https://predictivemodeler.com/category/book/principles/feed/" rel="self" type="application/rss+xml" />
	<link>https://predictivemodeler.com/category/book/principles/</link>
	<description></description>
	<lastBuildDate>Sun, 20 Oct 2019 05:16:10 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.6.1</generator>

<image>
	<url>https://predictivemodeler.com/wp-content/uploads/2019/01/favicon.png</url>
	<title>Principles Archives : Predictive Modeler</title>
	<link>https://predictivemodeler.com/category/book/principles/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Classical vs Machine</title>
		<link>https://predictivemodeler.com/2019/10/15/classical-vs-machine/</link>
					<comments>https://predictivemodeler.com/2019/10/15/classical-vs-machine/#respond</comments>
		
		<dc:creator><![CDATA[Syed Mehmud]]></dc:creator>
		<pubDate>Wed, 16 Oct 2019 03:24:55 +0000</pubDate>
				<category><![CDATA[Principles]]></category>
		<guid isPermaLink="false">https://predictivemodeler.com/?p=2708</guid>

					<description><![CDATA[<p>There is a reason why statistical texts spill copious amounts of ink on data sampling and survey designs. Historically speaking, data was incredibly laborious to collect and to analyze with limited computational resources. This led to the development of mathematical modeling techniques that relied on small amounts of information. These models are simplistic and replete [&#8230;]</p>
<p>The post <a href="https://predictivemodeler.com/2019/10/15/classical-vs-machine/">Classical vs Machine</a> appeared first on <a href="https://predictivemodeler.com">Predictive Modeler</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p style="text-align: justify;">There is a reason why statistical texts spill copious amounts of ink on data sampling and survey designs. Historically speaking, data was incredibly laborious to collect and to analyze with limited computational resources. This led to the development of mathematical modeling techniques that relied on small amounts of information. These models are simplistic and replete with assumptions in order to make the math tractable. The paucity of data and the forcing of simplifying assumptions meant sacrificing significant predictive power. Well, not really a <em>sacrifice</em> as there was no alternative!</p>
<p style="text-align: justify;">Nowadays we are awash in data. Easy access to data and to huge amounts of computational power is completely changing our relationship with data and our approach to mathematical modeling. Generalizing from small amounts of data required making careful and tedious assumptions about model behavior. That is no longer needed, nor is the tractability of mathematical modeling all that <span class="tooltips " style="" title="This is likely a controversial view!"><span style="color: #000080;">important</span></span>. What is all-important is what has been so all along &#8211; <em>how useful is the model</em>? Whether the model can be found in an old statistical textbook is not only irrelevant, it can shackle innovative thinking. The only affirmation of a model is its performance, not its pedigree.</p>
<p style="text-align: justify;">I expect that these views will not be welcomed by all, particularly the purists who believe in the frequentist/Bayesian paradigms that have dominated historical work in predictive modeling. I believe that with the data available today to build and test models, it is irrelevant whether the modeling is &#8216;sound&#8217; by some arcane definition, or whether it can be interpreted in some academically narrow or sanctioned way. What is relevant is whether the model facilitates some useful outcome, and that it does so in a way that is robust to new data.</p>
<p style="text-align: justify;">To be clear, in some applications the transparency or <em>interpretability</em> of a model is a valid concern &#8211; and we make performance trade-offs to accommodate those features. I suspect that with the vast new neural nets that are being designed and built for ubiquitous applications (e.g. organizing our photographs, reading diagnostic charts, etc.) &#8211; whether a human understands the model or not will be an increasingly marginal concern in the future.</p>
<p>The post <a href="https://predictivemodeler.com/2019/10/15/classical-vs-machine/">Classical vs Machine</a> appeared first on <a href="https://predictivemodeler.com">Predictive Modeler</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://predictivemodeler.com/2019/10/15/classical-vs-machine/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The Notion of Predictability</title>
		<link>https://predictivemodeler.com/2019/10/12/the-notion-of-predictability/</link>
					<comments>https://predictivemodeler.com/2019/10/12/the-notion-of-predictability/#respond</comments>
		
		<dc:creator><![CDATA[Syed Mehmud]]></dc:creator>
		<pubDate>Sat, 12 Oct 2019 14:38:45 +0000</pubDate>
				<category><![CDATA[Principles]]></category>
		<category><![CDATA[Predictability]]></category>
		<guid isPermaLink="false">https://predictivemodeler.com/?p=2703</guid>

					<description><![CDATA[<p>This book and whole professions (e.g. predictive modelers, data scientists, economists, etc.) hinge on the assumption that the future can be predicted to some degree. But can it? The Newtonian revolution reinforced not only a physical, but a causal determinism about our reality. The past and the future are inextricably linked. We can predict the motion [&#8230;]</p>
<p>The post <a href="https://predictivemodeler.com/2019/10/12/the-notion-of-predictability/">The Notion of Predictability</a> appeared first on <a href="https://predictivemodeler.com">Predictive Modeler</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>This book and whole professions (e.g. predictive modelers, data scientists, economists, etc.) hinge on the assumption that the future can be predicted to some degree. But can it?</p>
<p>The Newtonian revolution reinforced not only a physical, but a causal determinism about our reality. The past and the future are inextricably linked. We can predict the motion of objects, small and heavenly, based on where they have been and the (constant) forces acting upon them. There grew a sense that if we could figure out the equation(s) of all the forces and the precise current state of each atom that composes our reality &#8211; we might be able to predict everything.</p>
<p>Heisenberg spoiled the party by suggesting that the fabric of reality is such that we can <em>never</em> know everything about the <span class="tooltips " style="" title="For example, we can only know the position or the momentum - but not both!">present</span> let alone the future. But there was a saving grace. While we don&#8217;t know what each atom is up to, bunches of them behave in probabilistically predictable ways &#8211; and the most successful theory in the physical sciences, <em>quantum electrodynamics</em>, was born.</p>
<p>Let me get back from my nerdish tangent to the question at hand. There is a common refrain that the past is not prologue. Just because something happened in the past does not mean that it will happen in the future. The honest truth is that we just don&#8217;t know. We think the sun will rise tomorrow as it has ever since the dawn of man &#8211; but it could be swallowed by a rogue black hole overnight!</p>
<p>Uncertainty does not mean that we give up, and stop trying to learn new physics or to use mathematical modeling to predict the future. It just means that we recognize the limitations of our processes, and that we are going to have to make iterative improvements to our modeling whenever the future refuses to obey our abstractions of it &#8211; which it predictably will.</p>
<p>The post <a href="https://predictivemodeler.com/2019/10/12/the-notion-of-predictability/">The Notion of Predictability</a> appeared first on <a href="https://predictivemodeler.com">Predictive Modeler</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://predictivemodeler.com/2019/10/12/the-notion-of-predictability/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What is a model?</title>
		<link>https://predictivemodeler.com/2019/10/12/what-is-a-model/</link>
					<comments>https://predictivemodeler.com/2019/10/12/what-is-a-model/#respond</comments>
		
		<dc:creator><![CDATA[Syed Mehmud]]></dc:creator>
		<pubDate>Sat, 12 Oct 2019 14:14:35 +0000</pubDate>
				<category><![CDATA[Principles]]></category>
		<category><![CDATA[Model]]></category>
		<guid isPermaLink="false">https://predictivemodeler.com/?p=2700</guid>

					<description><![CDATA[<p>A model in the context of predictive modeling is simply some useful mathematical abstraction of reality. A model should be (a lot) less complex than the reality it is intended to represent. This might seem obvious, but with ever increasing computation at our fingertips we need to carefully assess the relative complexity of our [&#8230;]</p>
<p>The post <a href="https://predictivemodeler.com/2019/10/12/what-is-a-model/">What is a model?</a> appeared first on <a href="https://predictivemodeler.com">Predictive Modeler</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>A model in the context of <em>predictive modeling</em> is simply some useful mathematical abstraction of reality. A model should be (a lot) less complex than the reality it is intended to represent. This might seem obvious, but with ever increasing computation at our fingertips we need to carefully assess the relative complexity of our model and its subject reality.</p>
<p>The construction of a model starts with a series of assumptions and an objective. The objective could be understanding a real-world process better, explaining what happened in the past, or making a prediction about the future. The assumptions include the set of data that we think influences the outcome, and what sort of relationship (e.g. linear) that data has with our objective. Those are the big ones anyway &#8211; then there are a whole bunch of little ones along the way. It is important to document assumptions and think of ways to test them as you are building your model.</p>
<p>Another thing that is oftentimes overlooked is that a model is not simply a mathematical formula. Unless your audience has 100% of the context you hold in your head (rare!), a model includes the mathematical bits (e.g. code, Excel, wherever else the math resides) but also the documentation, including the assumptions. Further, it is always nice to have an executive summary/overview to make it easier for your audience to consume your work.</p>
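<p>As a minimal sketch of these ideas (my own illustration on synthetic data, assuming Python with NumPy and scikit-learn installed), here is a tiny model whose objective and assumptions are written down right next to the math:</p>
<pre><code># A minimal sketch of a documented model, fit on synthetic data.
# Objective: predict y from two predictors.
# Assumptions (stated up front, so they can be tested later):
#   1. The outcome depends approximately linearly on the two predictors.
#   2. Observations are independent; the noise level is roughly constant.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 2))               # the data we believe matters
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=500)

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_)         # should recover roughly [3.0, -1.5]
print("R^2 on training data:", model.score(X, y))
</code></pre>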
<p>The post <a href="https://predictivemodeler.com/2019/10/12/what-is-a-model/">What is a model?</a> appeared first on <a href="https://predictivemodeler.com">Predictive Modeler</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://predictivemodeler.com/2019/10/12/what-is-a-model/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Principle of Parsimony</title>
		<link>https://predictivemodeler.com/2019/09/29/principle-of-parsimony/</link>
					<comments>https://predictivemodeler.com/2019/09/29/principle-of-parsimony/#respond</comments>
		
		<dc:creator><![CDATA[Syed Mehmud]]></dc:creator>
		<pubDate>Sun, 29 Sep 2019 15:17:22 +0000</pubDate>
				<category><![CDATA[Principles]]></category>
		<category><![CDATA[Principle of Parsimony]]></category>
		<guid isPermaLink="false">https://predictivemodeler.com/?p=2687</guid>

					<description><![CDATA[<p>You may have heard the phrase &#8216;less is more&#8217;. I believe it applies well to predictive modeling and programming in general. Perhaps it also applies to transmittal of information more generally. As Mark Twain is said to have remarked: I did not have time to write a short letter, so I wrote a long one instead. Even in [&#8230;]</p>
<p>The post <a href="https://predictivemodeler.com/2019/09/29/principle-of-parsimony/">Principle of Parsimony</a> appeared first on <a href="https://predictivemodeler.com">Predictive Modeler</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p style="text-align: justify;">You may have heard the phrase &#8216;<em>less is more</em>&#8217;. I believe it applies well to predictive modeling and programming in general. Perhaps it also applies to transmittal of information more generally. As Mark Twain is said to have remarked: <em>I did not have time to write a short letter, so I wrote a long one instead</em>.</p>
<p style="text-align: justify;">Even in the sciences, compact equations (<em>E=mc<sup>2</sup></em>) are elegant and seem to demonstrate that we have understood something about our reality in a profound way. Larger, messier representations of reality convey a feeling that more work needs to be done to distill out irrelevant details and get to a deeper state of understanding.</p>
<p style="text-align: justify;">Predictive modeling is no exception in my opinion. More complex models are <span style="text-decoration: underline;"><strong>not</strong></span> &#8216;better&#8217;. In fact, they are sub-optimal if simpler models with fewer assumptions are possible with little or no loss of performance. Most of the time one would (and should) trade off some performance for simplicity. And by the way, this applies not only to the final model form (e.g. 5 predictors instead of 10), but also to modeling techniques (e.g. black-box methods vs. transparent and easier to understand approaches).</p>
<p style="text-align: justify;">This is 2019 &#8211; and our predictive modeling efforts are still for review and use by other humans. When the time comes that our algorithms are for the sole consumption of our machine overlords, you can chuck the principle of parsimony and be as opaque as you please. Until that time, <strong>keep relentlessly simplifying your methods and models, stopping only at the point where further simplification would degrade performance significantly for your application.</strong> You will build better models, communicate them more effectively, and increase the odds that you have understood something real about the world.</p>
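<p>A quick sketch of the trade-off (my own illustration on synthetic data, assuming Python with NumPy and scikit-learn installed): cross-validate a model built on five meaningful predictors against one padded out with five noise columns. The simpler model typically scores just as well, or better:</p>
<pre><code># Sketch of the parsimony trade-off on synthetic data: the extra
# predictors carry no signal, so the simpler model loses nothing.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_signal = rng.normal(size=(300, 5))        # predictors that matter
X_noise = rng.normal(size=(300, 5))         # predictors that do not
y = X_signal.sum(axis=1) + rng.normal(scale=1.0, size=300)

for name, X in [("5 predictors", X_signal),
                ("10 predictors", np.hstack([X_signal, X_noise]))]:
    score = cross_val_score(LinearRegression(), X, y, cv=5).mean()
    print(f"{name}: mean cross-validated R^2 = {score:.3f}")
</code></pre>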
<p>The post <a href="https://predictivemodeler.com/2019/09/29/principle-of-parsimony/">Principle of Parsimony</a> appeared first on <a href="https://predictivemodeler.com">Predictive Modeler</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://predictivemodeler.com/2019/09/29/principle-of-parsimony/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Quantum Computing</title>
		<link>https://predictivemodeler.com/2019/02/24/quantum-computing/</link>
					<comments>https://predictivemodeler.com/2019/02/24/quantum-computing/#respond</comments>
		
		<dc:creator><![CDATA[Syed Mehmud]]></dc:creator>
		<pubDate>Sun, 24 Feb 2019 15:29:16 +0000</pubDate>
				<category><![CDATA[Quantum Computing]]></category>
		<guid isPermaLink="false">https://predictivemodeler.com/?p=2590</guid>

					<description><![CDATA[<p>Something exciting is happening in the world of computing. And the excitement couldn&#8217;t come soon enough for those grieving the demise of Moore&#8217;s Law. Gordon Moore, one of the founders of Intel, observed that the number of transistors in a dense integrated circuit doubles every two years. For the last fifty years, advancements in computing have [&#8230;]</p>
<p>The post <a href="https://predictivemodeler.com/2019/02/24/quantum-computing/">Quantum Computing</a> appeared first on <a href="https://predictivemodeler.com">Predictive Modeler</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Something exciting is happening in the world of computing. And the excitement couldn&#8217;t come soon enough for those grieving the demise of Moore&#8217;s Law. Gordon Moore, one of the founders of Intel, observed that the number of transistors in a dense integrated circuit doubles every two years.</p>
<p>For the last fifty years, advancements in computing have been about making transistors smaller. Let me take a step back. Most of us know that computers speak a <em>different</em> language. The language of bits (i.e. 0 and 1). But if you open up a computer and sort through its innards, you don&#8217;t find any 1&#8217;s and 0&#8217;s lying around. You see chips, and these chips contain transistors. Transistors either allow electrons to pass or not, depending upon the voltage that is applied. A bit can be built from an arrangement of a couple or more transistors. Eight bits make a byte. And many, many bytes make up the internet, apps, software, these words you are reading, etc.</p>
<p>Back to Moore&#8217;s Law. It&#8217;s dying because it&#8217;s getting harder to make transistors any smaller. They are getting so small now that engineers have to worry about <em>quantum tunneling</em> effects<sup class="modern-footnotes-footnote ">1</sup>. In this effect, electrons essentially &#8211; and probabilistically &#8211; walk through walls, making a mockery of our efforts to control their flow, which is critical to the operation of a transistor!</p>
<p>Intuitively, the laws of reality should be the same whether something is big or small. Whether we are holding a grain of sand or an apple, gravity pulls both down when we release them. However, our intuition about reality is based on our experiences with matter containing trillions and trillions of atoms. A single grain of sand has 10<sup>20</sup> molecules. If each molecule were a human, a grain of sand would represent enough humans to populate 14 billion Earths! But reality does not care about our intuition, and the world of the extremely tiny, the quantum world, is <em>bizarre</em>!</p>
<p>In an Aikido-like move, we have figured out how to take the very thing that is killing rapid advancement in transistor-based computing and turn it into a significant advantage. We can utilize the bizarre concepts of quantum <em>superposition</em> and <em>entanglement</em> to solve certain problems that are intractable for our current computers. Such problems include optimization (e.g. traveling salesman problem), simulating molecular interactions in chemistry, the <em>N</em>-body problem in physics, etc.</p>
<p>Computational tasks can be classified as &#8220;P&#8221; and &#8220;NP&#8221;. &#8220;P&#8221; (or polynomial-time) tasks are those that current computers can solve quickly <sup class="modern-footnotes-footnote ">2</sup>. &#8220;NP&#8221; (or nondeterministic polynomial-time) tasks are those whose solutions current machines can verify quickly, even though we know of no way to solve them all quickly. A quantum computer holds the promise of quickly solving certain &#8220;NP&#8221; problems believed to lie outside &#8220;P&#8221;. One example is being able to break encryption protocols quickly, since the integer factoring they rely on is exactly such a problem. Given how many passwords each of us has for everything from our bank accounts to Netflix, needless to say that would be a very bad thing (but it could also lead to good things, like impenetrably secure data transmission)!<sup class="modern-footnotes-footnote ">3</sup></p>
<p>A quantum computer excels at computational problems whose solution space scales <em>exponentially</em>. As long as a computational task can be <em>encoded</em> onto a quantum environment, we can observe the probabilistic results of that environment in order to get solutions to that problem.</p>
<p>This is a very different approach to solving a computational task. Our current method maps a task into a sequential operation of transistors comparing 0&#8217;s and 1&#8217;s in order to get to a final discrete state (i.e. one correct answer). For some problems, our best bet today is to search through the entire solution-space bit by bit<sup class="modern-footnotes-footnote ">4</sup> in order to find the right answer.</p>
<p>In a quantum computer you can encode the solution-space of the problem onto the massive number of states (or <em>phase</em>-space) accessible to a quantum system. A particle can act like a wave. When waves are <em>in phase</em> their amplitudes add; when they are out of phase, they cancel. We can utilize the principles of <em>interference</em> in order to amplify the correct answer and suppress all the incorrect ones. Nature itself reveals the solution to us.</p>
<p>Two quantum mechanical characteristics are particularly important to quantum computers:</p>
<h4>Superposition</h4>
<p>The basic unit of a quantum computer is a quantum-bit, q-bit or <em>qubit</em>. Whereas a bit can be 0 or 1, a qubit can be in a <em>superposed</em> state of 0 and 1. In fact, it is 0 <em>and</em> 1. A way to think about it is to imagine a sphere. Whereas a traditional bit is either the north pole (=1) or the south pole (=0), a qubit can be anywhere on a longitude connecting the north and the south pole. When we <em>encode</em> information on the qubit, we also give it a <em>phase</em>, which rotates the state around the sphere&#8217;s axis &#8211; sliding it along a latitude away from that starting longitude.</p>
<p>Adding more transistors to our current machines leads to a linear increase in compute, as each bit can hold only one value at a time. In a quantum computer the computational ability scales exponentially. Since qubits can be in a superposition of 1 and 0, <em>n</em> qubits can represent 2<em><sup>n</sup></em> states. Ten qubits can represent 1024 states. One hundred qubits represent about 1.27 x 10<sup>30</sup> states!</p>
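<p>A tiny sketch of this scaling (my own illustration, assuming Python with the open-source Qiskit SDK installed): put <em>n</em> qubits into an equal superposition and count the amplitudes in the resulting statevector:</p>
<pre><code># Sketch: an n-qubit uniform superposition has 2**n amplitudes.
# Assumes the open-source Qiskit SDK is installed (pip install qiskit).
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

for n in (1, 3, 10):
    qc = QuantumCircuit(n)
    qc.h(range(n))                  # a Hadamard gate on every qubit
    sv = Statevector.from_instruction(qc)
    print(f"{n} qubits: {len(sv.data)} amplitudes (2**{n} = {2 ** n})")
</code></pre>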
<h4>Entanglement</h4>
<p>The other important concept in quantum computing is entanglement. If we actually live in a simulation<sup class="modern-footnotes-footnote ">5</sup>, one piece of evidence might be the weird glitch embedded in our reality called entanglement! Einstein called it &#8220;spooky action at a distance&#8221;. The reason it is so terrifying is that we <em>know</em> that there is a cosmic speed limit, just like we know there is this thing called gravity. Nothing can travel faster than the speed of light. And yet, if two particles are entangled, they seemingly are able to communicate regardless of the distance between them. We can have one particle in the Milky Way and the other in the Andromeda galaxy (2.5 million light years away!). If we measure the Milky Way particle to have one state, the Andromeda one <em>instantly</em> acquires the opposite state. It is as if they instantly exchange information between them, ignoring the cosmic speed limit. Do we actually know what we thought we knew?</p>
<p>A quantum computer is able to use the principle of entanglement by creating superpositions such that if we measure one &#8220;particle&#8221; to have state 1, the other is instantly in state 0.</p>
<p>Entanglement is one of those amazing mysteries in physics, and the following is a really good (new) documentary on it:</p>
<p><iframe src="//www.youtube.com/embed/Mn4AwineA5o" width="560" height="314" allowfullscreen="allowfullscreen"></iframe></p>
<p>But we can do more than just see a documentary. We can demonstrate entanglement on <strong>an actual real-life quantum computer! </strong>See the video below that I recorded demonstrating entanglement on an IBM quantum computer<sup class="modern-footnotes-footnote ">6</sup>.</p>
<div style="position: relative; padding-bottom: 56.25%; height: 0;"><iframe style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" src="https://www.useloom.com/embed/fc16ec263c804e978f280bfc31ffef9b" frameborder="0" allowfullscreen="allowfullscreen"><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start">﻿</span></iframe></div>
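<p>If you would rather poke at it in code, here is a minimal sketch of the same Bell-state experiment (my own illustration, assuming Python with Qiskit and its Aer simulator installed; running on real IBM hardware additionally requires an account). Measurements come out 00 or 11 &#8211; essentially never 01 or 10:</p>
<pre><code># Sketch of the Bell-state entanglement demo: roughly half the shots
# read '00' and half read '11'; '01' and '10' essentially never appear.
# Assumes Qiskit and its simulator (pip install qiskit qiskit-aer).
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into superposition
qc.cx(0, 1)                  # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])   # read both qubits into classical bits

counts = AerSimulator().run(qc, shots=1024).result().get_counts()
print(counts)                # e.g. {'00': 517, '11': 507}
</code></pre>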
<h3>Last Word</h3>
<p>Quantum computing is in its infancy right now. The technology is very hard to engineer and its usefulness is still rather academic. However, things could change very quickly. I suspect that this technology will have major consequences for things like secure communications and artificial intelligence. Perhaps the reason we have not yet built a sentient machine is that our brains are made of physical stuff, and may be accessing the world of the quantum. We might have strong Ai<sup class="modern-footnotes-footnote ">7</sup> if we built it using a quantum computer. At any rate &#8211; definitely one of the technologies to watch for the future!</p>
<h3>Additional Resources:</h3>
<p>A good explanation of quantum computing: <a href="https://cosmosmagazine.com/physics/quantum-computing-for-the-qubit-curious" target="_blank" rel="noopener noreferrer">link</a></p>
<div>1&nbsp;&nbsp;&nbsp;&nbsp;<a href="http://abyss.uoregon.edu/~js/glossary/quantum_tunneling.html" target="_blank" rel="noopener noreferrer">Click for description</a></div><div>2&nbsp;&nbsp;&nbsp;&nbsp;The distinction between P and NP represents one of the most important unsolved problems in computer science. Read more <a href="http://www.claymath.org/millennium-problems/p-vs-np-problem" target="_blank" rel="noopener noreferrer">here</a></div><div>3&nbsp;&nbsp;&nbsp;&nbsp;More on the possible effects of quantum computing on encryption <a href="https://www.economist.com/science-and-technology/2018/10/20/quantum-computers-will-break-the-encryption-that-protects-the-internet" target="_blank" rel="noopener noreferrer">here</a></div><div>4&nbsp;&nbsp;&nbsp;&nbsp;pun intended!</div><div>5&nbsp;&nbsp;&nbsp;&nbsp;See this video of Elon Musk laying out the argument for why we might be living in a simulation: <a href="https://www.youtube.com/watch?v=xBKRuI2zHp0" target="_blank" rel="noopener noreferrer">video</a></div><div>6&nbsp;&nbsp;&nbsp;&nbsp;See this <a href="https://www.youtube.com/watch?v=S52rxZG-zi0" target="_blank" rel="noopener noreferrer">video </a>for additional info</div><div>7&nbsp;&nbsp;&nbsp;&nbsp;Sentient and <em>general</em> intelligence machines rather than the specific, task-oriented Ai we have today</div><p>The post <a href="https://predictivemodeler.com/2019/02/24/quantum-computing/">Quantum Computing</a> appeared first on <a href="https://predictivemodeler.com">Predictive Modeler</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://predictivemodeler.com/2019/02/24/quantum-computing/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Introduction</title>
		<link>https://predictivemodeler.com/2019/01/24/introduction/</link>
					<comments>https://predictivemodeler.com/2019/01/24/introduction/#respond</comments>
		
		<dc:creator><![CDATA[Syed Mehmud]]></dc:creator>
		<pubDate>Thu, 24 Jan 2019 07:24:33 +0000</pubDate>
				<category><![CDATA[Principles]]></category>
		<guid isPermaLink="false">https://predictivemodeler.com/?p=2470</guid>

					<description><![CDATA[<p>My goal in this ebook is to get you most of the way towards creating powerful predictive models in a very short span of time, regardless of your prior experience! Features that make this book valuable There are many great resources, in print or on-line, for learning about Predictive Modeling. The following ideas make this [&#8230;]</p>
<p>The post <a href="https://predictivemodeler.com/2019/01/24/introduction/">Introduction</a> appeared first on <a href="https://predictivemodeler.com">Predictive Modeler</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>My goal in this ebook is to get you most of the way towards creating powerful predictive models in a very short span of time, regardless of your prior experience!</p>
<h5>Features that make this book valuable</h5>
<p>There are many great resources, in print or on-line, for learning about Predictive Modeling. The following ideas make <em>this</em> work different and valuable to you:</p>
<ul>
<li><strong>Less is more</strong>: a key to the usefulness of this book is not only what is in it, but also what isn&#8217;t. I only include practically useful content and limit theoretical discussion. Especially when it comes to scientific content, quality is <em>much</em> preferable to quantity.</li>
<li><strong>Practice makes perfect</strong>: the best way to learn about predictive modeling is by doing it, and the book aims to make that transition easy &amp; quick. The pages of this book contain working code which can be downloaded and run in a matter of minutes.</li>
<li><strong>Working smarter not harder</strong>: A lot of predictive modeling practitioners (myself included) have experienced spending 80% of our time tinkering with software, coding, or data wrangling&#8230;and only 20% of the time on the actual business application. One aim of the e-book is to <em>automate </em>a lot of that work. In doing so, we can make our processes (and not just the algorithms) smarter.</li>
<li><strong>Unique algorithms</strong>: the book contains original algorithms that I am experimenting with. It also contains heuristic adaptations of existing techniques geared for fast, practical, use.</li>
<li><strong>Quick &amp; easy</strong>: one does not need years of culinary training in order to cook up a wonderful meal. What we need is a good <em>recipe</em> and <em>ingredients</em>. Similarly, I believe that one does not need to be a mathematical savant<sup class="modern-footnotes-footnote ">1</sup> in order to practice predictive modeling.</li>
</ul>
<h5>Book organization</h5>
<p style="text-align: justify;">A table of contents is presented <a href="https://predictivemodeler.com/book/">here</a>. Even though many pages in the table of contents are not yet hyperlinked, they illustrate where the book is headed. I am adding new content to the book every week. You can scroll below the table of contents to see recently uploaded pages.</p>
<p>Each page or <em>post</em> has several sections. There may be a <em>prerequisite</em> section at the start of the post containing links to other posts that should be read first. The post may have a video tutorial section as well as a download area for working code.</p>
<p style="text-align: justify;">I hope that you enjoy my content and find it useful. Most importantly, I hope that you have fun learning &amp; applying predictive modeling!</p>
<div>1&nbsp;&nbsp;&nbsp;&nbsp;Being a savant helps for sure, however, special abilities are not necessary for exploring this field!</div><p>The post <a href="https://predictivemodeler.com/2019/01/24/introduction/">Introduction</a> appeared first on <a href="https://predictivemodeler.com">Predictive Modeler</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://predictivemodeler.com/2019/01/24/introduction/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The Fall and Rise of Ai</title>
		<link>https://predictivemodeler.com/2019/01/13/the-fall-and-rise-of-ai/</link>
					<comments>https://predictivemodeler.com/2019/01/13/the-fall-and-rise-of-ai/#respond</comments>
		
		<dc:creator><![CDATA[Syed Mehmud]]></dc:creator>
		<pubDate>Sun, 13 Jan 2019 15:11:20 +0000</pubDate>
				<category><![CDATA[Principles]]></category>
		<category><![CDATA[Ai]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<guid isPermaLink="false">https://predictivemodeler.com/?p=2358</guid>

					<description><![CDATA[<p>There is a revolution happening in analytics and Artificial Intelligence (Ai) is at its center. My goal with this post is to catch you up on the 80-year history of Ai by the time I finish my coffee. Fair warning, I do take longer than some! The idea of non-human intelligence has been around since [&#8230;]</p>
<p>The post <a href="https://predictivemodeler.com/2019/01/13/the-fall-and-rise-of-ai/">The Fall and Rise of Ai</a> appeared first on <a href="https://predictivemodeler.com">Predictive Modeler</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>There is a revolution happening in analytics and Artificial Intelligence (Ai) is at its center. My goal with this post is to catch you up on the 80-year history of Ai by the time I finish my coffee. Fair warning, I do take longer than some!</p>
<p>The idea of non-human intelligence has been around since antiquity. However, serious research on Artificial Intelligence (Ai) coincided roughly with the invention of the first programmable machines around 1940 <sup class="modern-footnotes-footnote ">1</sup>. Around the same time, Alan Turing&#8217;s work on the <a href="https://en.wikipedia.org/wiki/Theory_of_computation" target="_blank" rel="noopener"><em>theory of computation</em></a> showed that a machine could simulate almost any act of mathematical deduction just by shuffling ones and zeroes. If we presume human intelligence to be our ability to reason, it was suddenly possible &#8211; in theory at least &#8211; for a machine to mimic it. The promise of this research was immediately captivating and expectations soared high.</p>
<p>Lofty claims around a new technology are a double-edged sword. Funding pours into projects and companies, seemingly overnight. However, disappointment has a long memory and funding can dry up for years if expectations fall short. An <em>Ai winter</em> can set in.</p>
<p>There have been at least a couple such winters. The first one was during the Cold War. The US government wanted to translate Russian documents quickly. Instead, researchers quickly discovered that common sense was neither common nor easy among machines. A story goes that efforts to translate &#8220;the spirit is good but the flesh is weak&#8221; from a Russian cable yielded &#8220;the vodka is good but the meat is rotten&#8221;<sup class="modern-footnotes-footnote ">2</sup>.</p>
<p>I think that our knowledge is <em>grounded</em> in our embodied experience. We experience reality with several senses and that shapes our conversation. Language is messy and constantly evolving. No wonder that the promise of Ai did not translate at the time. But I digress.</p>
<p>The second Ai winter, in the late eighties, was a bit of an own goal. The Ai community split between those that saw promise in rigid, top-down <em>symbolic</em> Ai (e.g. expert systems), vs. flexible, bottom-up <em>connectionist</em> Ai (e.g. interconnecting artificial neurons). Symbolic Ai won the battle but lost the war. Expert systems were all the rage for a while. But they were too hard to maintain, did not learn, and were not fault-tolerant. Several years and millions of dollars later, they fell from grace by 1990. Important advances<sup class="modern-footnotes-footnote ">3</sup> were made in the theory of connectionist Ai; however, the computer processing power to apply them was just not there.</p>
<p>Research in Ai plateaued during the &#8217;90s and 2000s. Some members of the community re-branded themselves as <em>cognitive scientists</em>, working in <em>informatics</em>, <em>analytics</em>, or even <em>machine learning</em>. Watered-down&#8230;err&#8230;<em>narrower</em> definitions of Ai seemed technically feasible, and <i>general </i>Ai was considered a bit of a quack pursuit.</p>
<p>As a quick aside, <em>narrow</em> Ai is what is all around us today. Phone assistants like Siri, Google&#8217;s search engine, image detection, and self-driving cars are all examples of machines trained for specific (or, narrowly defined) tasks. General Ai is a machine that can do everything, including being sentient. We don&#8217;t have one of those&#8230;yet.</p>
<p>Two things happened after 2010 that thawed the latest Ai winter. Researchers converged upon a specific type of connectionist neural network architecture as the best candidate for learning patterns from data. This would ultimately lead to <em>deep learning.</em> I will describe it in another post, but will note for now that it is remarkable that <em>every example </em>of Ai you see today uses essentially the same algorithm! This type of architecture required massive processing power. Available CPUs were struggling to keep up.</p>
<p>Cut to 2013. The stock of NVIDIA, a chip-maker for computer graphics, had been languishing in the low teens for years. The company noticed all these graduate students buying their GPUs<sup class="modern-footnotes-footnote ">4</sup>. They realized that these students had not become hardcore gamers overnight, but that the GPU architecture lends itself well to deep learning<sup class="modern-footnotes-footnote ">5</sup>. A GPU can have orders of magnitude more <em>cores</em> than a CPU. And neural networks are <em>massively parallelizable</em>. It is a marriage made in binary heaven. You can take chunks of a neural network and distribute them across GPU cores in order to compute them simultaneously. NVIDIA&#8217;s stock went from $25 a share in 2015 to almost $300 by 9/2018.</p>
<p>Narrow Ai surrounds us today. Even my thermostat tells me that it is learning. It sounds wonderful to have little Ai helpers take over mundane tasks like programming the thermostat, organizing pictures, or recommending which show to binge next. Such harmless fun. However, the likes of Bill Gates and Elon Musk have sounded the alarm on the future potential of Ai.</p>
<p>One of my favorite exchanges involves the CEO of Facebook, Mark Zuckerberg, terming Ai naysayers &#8220;pretty irresponsible&#8221; in a casual BBQ video. Elon was quick to respond with &#8220;I&#8217;ve spoken to Mark about this. His understanding of the subject is limited&#8221;. Ouch.</p>
<p>I am almost through my coffee, and I&#8217;ll save comments around the potential perils of <em>general</em> Ai for a future post. I&#8217;ll end by noting that this time it feels different. Saying that we have only achieved <em>narrow</em> Ai used to be a dig at the perceived unfulfilled promise of Ai. Today, we recognize it <em>as</em> Ai. In our pocket, in our home, and in our car. Our relationship with deep learning is becoming personal and pervasive. And our increased reliance on machines whose operation we no longer understand should concern us.</p>
<p>For Ai research and researchers, it does not feel like another winter is coming anytime soon.</p>
<div>1&nbsp;&nbsp;&nbsp;&nbsp;Z1 was invented by the German civil engineer Konrad Zuse between 1936 and 1938. This was the world&#8217;s first electromechanical and programmable computer that used binary logic</div><div>2&nbsp;&nbsp;&nbsp;&nbsp;Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2</div><div>3&nbsp;&nbsp;&nbsp;&nbsp;for example, <em>backpropagation</em></div><div>4&nbsp;&nbsp;&nbsp;&nbsp;Graphical Processing Units</div><div>5&nbsp;&nbsp;&nbsp;&nbsp;from the excellent interview of OpenAi&#8217;s Greg Brockman on <a href="https://www.youtube.com/watch?v=QRBRGPrGrKs" target="_blank" rel="noopener"><em>This Week in Startups</em></a></div><p>The post <a href="https://predictivemodeler.com/2019/01/13/the-fall-and-rise-of-ai/">The Fall and Rise of Ai</a> appeared first on <a href="https://predictivemodeler.com">Predictive Modeler</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://predictivemodeler.com/2019/01/13/the-fall-and-rise-of-ai/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Blueprinting &#038; Prototyping</title>
		<link>https://predictivemodeler.com/2018/12/30/blueprinting-prototyping/</link>
					<comments>https://predictivemodeler.com/2018/12/30/blueprinting-prototyping/#respond</comments>
		
		<dc:creator><![CDATA[Syed Mehmud]]></dc:creator>
		<pubDate>Mon, 31 Dec 2018 03:49:51 +0000</pubDate>
				<category><![CDATA[Principles]]></category>
		<category><![CDATA[Blueprinting]]></category>
		<category><![CDATA[Design]]></category>
		<category><![CDATA[Prototyping]]></category>
		<guid isPermaLink="false">https://predictivemodeler.com/?p=2067</guid>

					<description><![CDATA[<p>The concepts that I am referring to as blueprinting and prototyping are hardly deserving of such fancy titles. And what I am about to write hardly deserves its own post. Yet these incredibly simple, yet powerful, concepts deserve any coder&#8217;s attention. Blueprinting Before you write any code, spend some time describing a blueprint of what you want [&#8230;]</p>
<p>The post <a href="https://predictivemodeler.com/2018/12/30/blueprinting-prototyping/">Blueprinting &#038; Prototyping</a> appeared first on <a href="https://predictivemodeler.com">Predictive Modeler</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The concepts that I am referring to as <em>blueprinting</em> and <em>prototyping</em> are hardly deserving of such fancy titles. And what I am about to write hardly deserves its own post. Still, these incredibly simple yet powerful concepts deserve any coder&#8217;s attention.</p>
<h5><strong>Blueprinting</strong></h5>
<p>Before you write any code, spend some time describing a blueprint of what you want your code to achieve. Code can be hard to grasp, especially densely written logic. Keeping track of nested loops, variables, and interim tables will test the RAM of even the most experienced of coders! It is easy to feel lost even when reviewing code you have written yourself, especially if returning to it after a break.</p>
<p>I recommend writing code in a sequence of scripts (or discrete blocks with clearly demarcated functionality), and the first in that sequence is a plain-English description of what each script intends to accomplish. Ordered bullet lists work better than descriptions in paragraph form. The simple exercise of writing down the functionality and its connection to the overall purpose of the predictive modeling exercise not only helps improve documentation, it can help you write organized, streamlined &#8211; <em>better</em> &#8211; code.</p>
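<p>As a hypothetical sketch of what such a blueprint script might look like (the script names, steps, and assumptions below are made up for illustration):</p>
<pre><code># 00_blueprint.py -- plain-English blueprint for this modeling pipeline.
# (A hypothetical sketch; the scripts and steps are illustrative only.)
#
# Purpose: predict customer churn from monthly usage data.
#
# 1. 01_extract.py  - pull the raw usage tables; write data/raw.csv
# 2. 02_clean.py    - drop duplicates, impute gaps; write data/clean.csv
# 3. 03_features.py - derive rolling averages and tenure; write data/features.csv
# 4. 04_model.py    - fit and cross-validate the model; save model and metrics
#
# Assumptions to test along the way: usage history is complete for active
# customers; churn labels are accurate to within one billing cycle.
</code></pre>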
<h5><strong>Prototyping</strong></h5>
<p>Especially when dealing with large data sets, it can be extremely inefficient to write &amp; test code on the target data. Try to find a clean data entry point into your code-flow. And then write &amp; test your code simultaneously by allowing a small amount of target data through. For example, use only the top (or random) one-thousand rows of data, rather than millions, in order to complete your code development. Prototype your solution with small amounts of data, and once satisfied, run the whole thing through it. This will help you develop error-free code quickly.</p>
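<p>A minimal sketch of the idea (assuming Python with pandas; the file path is a placeholder for your own pipeline):</p>
<pre><code># Sketch: prototype on a slice of the data, then run the full set.
# 'data/target.csv' is a placeholder path.
import pandas as pd

PROTOTYPE = True    # flip to False once the code-flow is debugged

if PROTOTYPE:
    df = pd.read_csv("data/target.csv", nrows=1_000)   # top 1,000 rows only
else:
    df = pd.read_csv("data/target.csv")                # the whole thing

# ...the rest of the pipeline is identical either way...
print(f"developing against {len(df):,} rows")
</code></pre>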
<p>Yep, told you the ideas don&#8217;t <em>look</em> like they deserve calling out in so many words. However, many who are new to coding, especially in data-rich environments, tend to overlook these. I promise you, these two simple concepts can save you hours if not days of re-work!</p>
<p>The post <a href="https://predictivemodeler.com/2018/12/30/blueprinting-prototyping/">Blueprinting &#038; Prototyping</a> appeared first on <a href="https://predictivemodeler.com">Predictive Modeler</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://predictivemodeler.com/2018/12/30/blueprinting-prototyping/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
