AI and technology thread


Re: AI and technology thread

Post by Clayton » Sat Nov 17, 2018 5:35 pm

Addenda on QC

The lay person tends to associate the word "quantum" with weird, mystical, almost magical properties of the material world. I've explained in past posts on my blog that, while quantum mechanics rendered classical mechanics obsolete, quantum computation does not render classical computation obsolete; it only augments it. This is because both classical computation and quantum computation can be organized as two departments under a larger theoretical umbrella called information theory. Quantum information theory (QIT) does not revise or modify classical IT; it only tells you how information theory behaves when you have access to quantum phenomena.

Another important point to keep in mind is that it is easy to specify classical problems so hard that even an ideal quantum computer would provide no speedup in solving them. The famous halting problem is just one example (in fact, it is undecidable: no machine, classical or quantum, can solve it in the general case). And far from being arcane problems of purely academic interest, many of these problems are the very things we care most about solving. To understand this, let me give an example.

Suppose we have a math problem of interest (it could be some matrix multiplication or calculating some gradients or whatever). Now, we search through the literature and find that the best known algorithm is in the exponential-time class. What this means is that, as the size of our problem grows, the running time of the best known algorithm grows exponentially. So, let's say that when our problem size is 2 (say, a 2x2 matrix), the running time is 4 units of time. When the problem size is 3, the running time is 8; when the size is 4, the running time is 16; and so on, doubling the running time for each increment in problem size. Obviously, we are going to run into a brick wall very fast. At a problem size of just 30, our running time is already 1,073,741,824 units of time. Even if our unit of time were 1 microsecond, the total running time would be nearly 18 minutes, and it would double to roughly 36 minutes at a problem size of 31. So it's obvious that this is a very tough problem, and there are a lot of tough problems like this that we would really like to be able to solve efficiently (e.g. simulating protein folding).
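
If it helps to see that blow-up in code rather than prose, here's a toy sketch (the one-microsecond unit is just the assumption from the example above, not anything fundamental):

# Toy illustration of exponential running time: assume each unit of work takes 1 microsecond.
def exponential_runtime_seconds(problem_size, unit_seconds=1e-6):
    steps = 2 ** problem_size     # the best known algorithm needs 2^n steps in this example
    return steps * unit_seconds

for n in (2, 3, 4, 30, 31, 40):
    print(n, 2 ** n, round(exponential_runtime_seconds(n), 2), "seconds")

# Problem size 30 already takes about 1,074 seconds (nearly 18 minutes);
# size 31 doubles that to roughly 36 minutes, and size 40 is roughly 12.7 days.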

An ideal quantum computer could provide what is called "exponential speedup". Let's apply exponential speedup to our hypothetical algorithm above and see what happens. When our problem size is 2, the running time (with exponential speedup) is 2. When the problem size is 3, the running time is 3. When the problem size is 30, the running time is just 30, roughly 36 million times faster! And that's the reason people are so excited by the possibility of QC. No one in the QC field thinks that we will be able to achieve the ideal speedup, that is, exponential speedup. (In fact, there are theoretical reasons why we cannot achieve the maximum theoretical speedup.) But even if we only managed to build a crappy quantum computer that achieves some polynomial speedup (e.g. square-power or cube-power speedup), we could still solve huge swathes of very important problems! So that's a good reason to want to build quantum computers.
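
To make the comparison concrete, here is the same toy model with the speedups layered on top. One common reading of "square-power speedup" is taking the square root of the classical running time (which is what Grover-style search buys you); the "ideal" column is the hypothetical best case from the example above:

# Same toy problem, comparing the number of steps under three scenarios.
for n in (2, 3, 30):
    classical = 2 ** n                 # no speedup
    quadratic = int(classical ** 0.5)  # square root of the classical step count
    ideal = n                          # the hypothetical ideal case from the example
    print(f"size {n:>2}: classical {classical:>13,}  quadratic {quadratic:>6,}  ideal {ideal}")

# At size 30 the ideal case is about 36 million times faster than the classical count.
# A huge gain, yet still irrelevant to undecidable problems like the halting problem.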

But let's go back to those nasty classical problems like the halting problem. This is what makes them so nasty: even if you had a computer with exponential speedup, it would be no help whatsoever in solving these problems any faster. Do you see the issue here? The best speedup that an ideal (unrealizable) quantum computer can achieve is still not enough to make a dent in huge swathes of classical problems that are superlatively important. So, even if we succeed in building quantum computers at scale, we are still stuck in the same rut we are currently in; that is, we are still stuck with huge swathes of important, unsolved (unsolvable) problems. The only difference is that, yes, many important problems that we cannot solve today will become solvable. My reason for pointing this out is that it is important to keep the promises of QC in proportion to reality; much of the hyperventilated rhetoric surrounding QC makes it out to be some kind of universal panacea that is going to turn every important problem in math and science into a push-button calculation. This is not true; in fact, it's downright irresponsible for pop-science journalists to keep spreading this misinformation among a lay public that simply doesn't have the background required to discern pop-sci hype from grounded, scientific optimism.
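
For anyone who hasn't seen why the halting problem is out of reach, here is the classic diagonal argument sketched in Python. The halts() oracle below is purely hypothetical; the whole point is that no such function can exist, and notice that speed never enters the argument:

def halts(program, argument):
    # Hypothetical oracle: returns True iff program(argument) eventually halts.
    # Assumed only so we can derive a contradiction.
    raise NotImplementedError("no such oracle can be written")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about running `program` on itself.
    if halts(program, program):
        while True:      # loop forever if the oracle says "halts"
            pass
    else:
        return           # halt immediately if the oracle says "loops forever"

# Does paradox(paradox) halt? It halts exactly when halts(paradox, paradox) says it
# doesn't, so every candidate implementation of halts() is wrong on some input.
# No amount of quantum speedup changes this; it is not a matter of speed at all.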

If you're interested in learning more about the time-complexity of algorithms and this topic is new to you, I recommend the following video:



Key insight: even having a quantum computer does not reduce NP to P. If P = NP, then everything in NP is already efficiently solvable (and a QC may speed it up further); but if P ≠ NP, then even a QC is not believed to let you solve problems that are in NP but not in P as though they were in P.


Re: AI and technology thread

Post by Clayton » Sun Nov 25, 2018 12:51 am

Mathematics and deductive thinking



The "chaining" he discusses is called "logical depth" in algorithmic information theory. It is a measure of how many steps of deduction are required to go from one's premises to the conclusion (or theorem) of interest.

Unfortunately, the history of the development of mathematics has accidentally created a lot of obstacles to math-learning along the way. First off, we might ask what mathematics is about. There is no clear answer. "Math is about numbers" is probably the closest thing to a generally acceptable answer, but if we ask "what are numbers?", we are back to square one, since many departments of math operate on irreconcilably different kinds of objects that can all be thought of as numbers.

I propose that we can divide mathematical thought into roughly two separate departments -- functional mathematics and aesthetic mathematics. Most of the time, these two departments seem to overlap completely, since our most useful mathematical theories almost always turn out to be the most aesthetically pleasing as well. In this post, I want to focus on the functional view of mathematical theory, even if that comes at the expense of aesthetics.

We can concretize functional mathematics in the following way. Let us suppose that we do not know how to multiply or divide. And let us suppose that we have a reasonably large pile of rocks -- let us say a meter high, each rock about the size of a clenched fist -- which we would like to count. Since we cannot multiply, and since we do not have a scale or any method of computing an average weight (which would require division), we are stuck counting the rocks one by one, picking each off the pile and tossing it onto a discard pile, taking care not to lose count so that we do not have to start over. Now, let us suppose that we do know how to multiply (but not how to divide). We can count the rocks much more quickly by getting on top of the pile and kicking the rocks out into a completely flat layer, aiming for a roughly square shape along the ground. Once we have flattened the pile, all we have to do is line the rocks up in even rows, like a grid, in an approximately square shape. Once all the rocks are aligned, vertically and horizontally, we only need to count the number of rows and the number of columns, multiply, and then subtract the empty spots (if any), since the number of rocks will generally not form a perfect rectangle. Thus we see that multiplication is a very human kind of thing: it is a practical shortcut to counting one by one, or to performing repetitive sums (like adding the number 5 to itself over and over).

If we also know how to divide, we can estimate the number of rocks even more quickly by weighing a random sample of rocks to get an average weight per rock, then weighing the whole pile in a single go (assuming our scale can handle that much weight) and dividing by the average weight. Our result will not be exact; it will have some range of error. But if we have that many rocks, we probably don't care to count them down to the individual rock.
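
If it helps, here is the whole example reduced to a few lines (every number below is invented purely for illustration):

# Two shortcuts for counting a big pile of rocks.
rows, columns = 31, 33        # the flattened pile, lined up in a rough grid
empty_spots = 7               # the last row isn't quite full
count_by_multiplication = rows * columns - empty_spots    # exact: 1016

sample_weights_kg = [0.38, 0.45, 0.41, 0.43]    # a random handful, weighed one by one
average_weight = sum(sample_weights_kg) / len(sample_weights_kg)
total_weight_kg = 412.0                         # the whole pile weighed in a single go
count_by_division = total_weight_kg / average_weight       # approximate, with some error

print(count_by_multiplication)     # 1016
print(round(count_by_division))    # about 987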

Understanding this "bridge" between the practical application of mathematics and the abstract theory of mathematics is essential. It is easy for those who are not mathematically inclined to forget just how much theoretical analysis goes into our everyday life -- everything from the gadgets we carry (cellphones) to the machines that carry us (cars, trains, planes, etc.). The engineering that makes these devices real is built on a theoretical foundation of mathematics. And this theoretical foundation is really all about finding shortcuts, like the rock-counting shortcuts I explained above. If we squint, we can say that the rock-counter and the rocket engineer are doing precisely the same thing, just to different tolerances (precision) and using deeper or shallower theoretical tools (tools that rely on longer or shorter chains of deduction).

It can be argued that mathematics is a much more academic field than the other sciences. I don't have any numbers to support my hunch, but my hunch is that someone with an advanced degree in mathematics is much more likely to go into academia than someone with an advanced degree in physics or another science. There's nothing better or worse about this, but I think it does contribute to the "math echo chamber" or the "academic ivory tower" that many of us perceive in the body of mathematics. It can sometimes seem impossible to figure out what a given branch of mathematics is even about, let alone the significance of its theoretical results. Popular explanations are often so dumbed-down as to be useless. In short, it seems that the only way to learn certain subjects is to go to university and take a course in them... a cost that is prohibitively high merely to satisfy a general curiosity.

I don't think there is any easy solution to this problem for the foreseeable future, but I do think that people should get over the idea that mathematics is "not for me." Mathematics is nothing more or less than completely organized thinking. Mathematics emphasizes organization over application because the real world is messy, and that messiness can prevent us from finding solutions to solvable problems by imposing irrelevant conditions on the solution. For example, many of the most celebrated results in mathematics (e.g. Euclid's theorem on the infinitude of primes) are proved by use of infinite sets or other infinity-based arguments. Of course, there is no actual infinitude in the real world. If we restricted ourselves to reasoning about numbers only in ways that can, at every point, be applied to the real world, then we would never be able to use infinite sets (because there are no real infinite sets), and we could never prove basic facts about the natural numbers that are so useful they pop up in countless other mathematical theorems.
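
As a refresher, the shape of Euclid's argument (in its usual modern rendering) shows why: suppose the primes were exhausted by a finite list p_1, p_2, ..., p_n, and consider

N = p_1 * p_2 * ... * p_n + 1.

Each p_i leaves a remainder of 1 when divided into N, so any prime factor of N lies outside the list. Hence no finite list of primes is complete; the conclusion is about an infinite collection even though every individual step of the reasoning is finite.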

Proponents of the praxeological method are likely to see the value in deductive reasoning, including mathematics. However, there is a certain "allergy" to the intentional non-realism of mathematical methods when applied to economics, especially aggregating methods like those employed by Keynes. These objections are well-founded, but recent developments in mathematics in the era of computation are changing the relevance of objections to the older style of abstraction. A determined person could categorize praxeology as a department of game theory in which the constraints of the game(s) to be played are bound at every point by real-world conditions. In turn, game theory and computation are merging to enable richly textured simulations of human behavior at scales never before possible. As with any chaotic system (e.g. the weather), simulation cannot provide iron-clad, deterministic predictions about the eventual evolution of the system... but as simulations improve, they tend to be more and more correct, more and more often. We cannot wave such simulations aside on the grounds that they are sometimes completely wrong and rarely, if ever, completely correct. The fact that they are often pretty correct is significant and has profound implications for economic science.

I'll close with a visual proof of Pythagoras's theorem which I find much more compelling than symbolic proofs:
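
In words (assuming the standard rearrangement diagram): take four copies of a right triangle with legs a, b and hypotenuse c, and arrange them inside a square of side a + b in two different ways: once leaving a tilted square of side c uncovered, and once leaving two squares of sides a and b uncovered. Equating the uncovered areas,

(a + b)^2 - 4*(ab/2) = c^2    and    (a + b)^2 - 4*(ab/2) = a^2 + b^2,

so c^2 = a^2 + b^2.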



Re: AI and technology thread

Post by Clayton » Wed Dec 05, 2018 4:45 am

Just watched this, highly recommend:



I'm somewhere on the spectrum between Goertzel and McKenna in terms of how I look at the world (symbology vs. empiricism), but Goertzel's views on AGI and the Singularity in general are very balanced, and I can't help finding myself agreeing with him almost completely. There's an AI tsunami coming and it's going to change everything, and it will almost certainly happen faster than the person on the street imagines such widespread change to be possible. Consider this article I just read today: AI is changing the way aircraft are built. The gains AI delivers in weight, fuel and energy are already tangible. Aircraft designers are leveraging AI to design components that no human could ever design, because the number of possible combinations is just too high. The AI can evaluate millions, billions or even trillions of candidate designs, score each one for strength versus weight, rank them, and then grow new designs from the best of the batch. This kind of approach, where designers make the machine do the heavy lifting, is only going to spread to other fields, and it is going to yield more and more gains as AI algorithms become more sophisticated and power-efficient.
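
To give a flavor of what that loop looks like, here is a deliberately toy sketch of a "generate, score, rank, grow" search. None of this is real aerospace tooling; the strength and weight models are invented stand-ins, and the point is only the shape of the loop:

import random

def random_design():
    # A "design" here is just a list of eight member thicknesses (hypothetical).
    return [random.uniform(1.0, 10.0) for _ in range(8)]

def score(design):
    strength = sum(t ** 0.5 for t in design)    # invented stand-in for a strength model
    weight = sum(design)                        # invented stand-in for mass
    return strength / weight                    # strength per unit weight: higher is better

def mutate(design):
    # "Grow" a new candidate by nudging a good one.
    return [max(1.0, t + random.gauss(0, 0.5)) for t in design]

population = [random_design() for _ in range(200)]
for generation in range(50):
    population.sort(key=score, reverse=True)    # rank every candidate
    parents = population[:20]                   # keep the best strength-per-weight designs
    population = parents + [mutate(random.choice(parents)) for _ in range(180)]

best = max(population, key=score)
print(round(score(best), 4))                    # best strength-to-weight ratio found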


Re: AI and technology thread

Post by Clayton » Fri Dec 07, 2018 3:02 am

Fairly accessible, especially the introduction:



Artificial neural networks are only the beginning. Brace yourself for A-life....


Re: AI and technology thread

Post by Jon Irenicus » Fri Dec 07, 2018 7:23 pm

Former overlord of the original Mises forum.


Re: AI and technology thread

Post by Clayton » Sat Dec 08, 2018 2:09 am

While DeepMind keeps knocking home runs out of the park, I am keeping a highly skeptical eye on their claims because no one can actually independently verify them. DeepMind can get away with this because, well, even if they published all their work (and they're not going to do that), nobody else has access to the Google-scale compute resources that would be required to actually cross-check the results DeepMind is getting. Given Google's blatant abandonment of its motto ("don't be evil"), I am all the more wary of DeepMind. They're probably a good 2-3 years ahead of the curve (compared to the rest of the ML community) just by virtue of the massive compute resources at their disposal... so the upside is that we know ML is capable of massive improvement as it scales up (in other words, we're not yet running into diminishing returns).
