
Re: AI and technology thread

Posted: Sat Nov 17, 2018 5:35 pm
by Clayton
Addenda on QC

The lay person tends to associate the word "quantum" with weird, mystical, almost magical properties of the material world. I've explained in past posts on my blog that, while quantum mechanics obsolesced classical mechanics, quantum computation does not obsolesce classical computation, it only augments it. This is because both classical computation and quantum computation can be organized as two departments under a larger theoretical umbrella called information theory. Quantum information theory (QIT) does not revise or modify classical IT, it only tells you how IT behaves when you have access to quantum phenomena.

Another important point to keep in mind is that it is easy to specify classical problems so hard that even an ideal quantum computer would provide no speedup in solving them. The famous halting problem is an extreme example: it is undecidable, meaning no computer, classical or quantum, can solve it at all. And far from being arcane problems of purely academic interest, many of these problems are the very things we care most about solving. To see why, let me give an example.

Suppose we have a math problem of interest (it could be some matrix multiplication or calculating some gradients or whatever). Now, we search through the literature and find that the best known algorithm is in the exponential-time class. What this means is that, as the size of our problem grows, the running time of the best known algorithm grows as an exponential. So, let's say when our problem size is 2 (say, a 2x2 matrix), the running time is 4 units of time. When the problem size is 3, the running time is 8; when the size is 4, the running time is 16; and so on, doubling the running time for each increment in problem size. Obviously, we are going to run into a brick wall very fast. At a problem size of just 30, our running time will already be 1,073,741,824 units of time. Even if our unit of time were 1 microsecond, the total running time would be nearly 18 minutes, and it would double to nearly 36 minutes for a problem size of 31. So, it's obvious that this is a very tough problem, and there are a lot of tough problems like this that we would really like to be able to solve efficiently (e.g. simulating protein folding).
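To make the arithmetic concrete, here is a minimal sketch. The 2^n cost model is the hypothetical one from the example above, not a real algorithm:

```python
# Illustrative arithmetic only: the exponential running-time numbers above,
# assuming a best-known algorithm that takes 2^n time units at problem size n.
def exponential_runtime(n):
    """Running time, in abstract time units, for problem size n."""
    return 2 ** n

assert exponential_runtime(2) == 4
assert exponential_runtime(3) == 8
assert exponential_runtime(30) == 1_073_741_824

# At 1 microsecond per time unit, problem size 30 already takes nearly
# 18 minutes, and size 31 takes twice that.
minutes = exponential_runtime(30) * 1e-6 / 60
print(f"size 30: {minutes:.1f} minutes")  # about 17.9 minutes
```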

An ideal quantum computer could provide what is called "exponential speedup". Let's apply exponential speedup to our hypothetical algorithm above and see what happens. When our problem size is 2, the running time (with exponential speedup) is 2. When the problem size is 3, the running time is 3. When the problem size is 30, the running time is just 30 -- tens of millions of times faster! And that's the reason people are so excited by the possibility of QC. No one in the QC field thinks we will be able to achieve this kind of ideal speedup across the board. (In fact, there are theoretical reasons why we cannot achieve the maximum theoretical speedup.) But even if we only managed to build a crappy quantum computer that achieves some polynomial speedup (e.g. a quadratic speedup, which cuts the running time to its square root), we could still solve huge swathes of very important problems! So, that's a good reason to want to build quantum computers.
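The speedup comparison can be sketched the same way. Here a "quadratic" (Grover-style) speedup means the running time drops to the square root of the classical figure; all numbers are illustrative only:

```python
import math

# Hypothetical running times at problem size n, in abstract time units.
def classical(n):
    return 2 ** n              # exponential-time algorithm

def ideal_speedup(n):
    return n                   # ideal "exponential speedup": 2^n -> n

def quadratic_speedup(n):
    return math.isqrt(2 ** n)  # Grover-style quadratic speedup: 2^n -> sqrt(2^n)

n = 30
print(classical(n))            # 1073741824 units
print(ideal_speedup(n))        # 30 units: tens of millions of times faster
print(quadratic_speedup(n))    # 32768 units: tens of thousands of times faster
```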

But let's go back to those nasty classical problems like the halting problem. This is what makes them so nasty: even if you had a computer with exponential speedup, it would be no help whatsoever in solving these problems any faster. Do you see the issue here? The best speedup that an ideal (unrealizable) quantum computer can achieve is still not fast enough to make even a dent in huge swathes of classical problems that are superlatively important. So, even if we succeed in building quantum computers at scale, we are still stuck in the same rut we are in today: stuck with a vast collection of important, unsolved (and unsolvable) problems. The only difference is that, yes, many important problems that we cannot solve today will become solvable. My reason for pointing this out is that it is important to keep the promises of QC in proportion to reality; much of the hyperventilated rhetoric surrounding QC makes it out to be some kind of universal panacea that is going to turn every important problem in math and science into a push-button calculation. This is not true; in fact, it's downright irresponsible for pop-science journalists to keep spreading this misinformation among a lay public that doesn't have the background needed to discern pop-sci hype from grounded, scientific optimism.

If you're interested in learning more about the time-complexity of algorithms and this topic is new to you, I recommend the following video:

Key insight: Even having a quantum computer is not believed to reduce NP to P (if P=NP, then everything in NP is efficiently solvable anyway and a QC just makes it faster; but if P≠NP, then even a QC is not known to let you solve problems in NP but not in P as though they were in P).

Re: AI and technology thread

Posted: Sun Nov 25, 2018 12:51 am
by Clayton
Mathematics and deductive thinking

The "chaining" he discusses is called "logical depth" in algorithmic information theory. It is a measure of how many steps of deduction are required to go from one's premises to the conclusion (or theorem) of interest.

Unfortunately, the history of the development of mathematics has accidentally created a lot of obstacles to math-learning along the way. First off, we might ask what mathematics is about. There is no clear answer. "Math is about numbers" is probably the closest thing to a generally acceptable answer, but if we ask "what are numbers?", we are back to square one, since many departments of math operate on irreconcilably different kinds of objects that can be thought of as numbers.

I propose that we can divide mathematical thought into roughly two separate departments -- functional mathematics and aesthetic mathematics. Most of the time, these two departments seem to completely overlap, since our most useful mathematical theories almost always seem to work out to be the most aesthetically pleasing, as well. In this post, I want to focus on the functional view of mathematical theory, even if that comes at the exclusion of aesthetics.

We can concretize functional mathematics in the following way. Let us suppose that we do not know how to multiply or divide. And let us suppose that we have a reasonably large pile of rocks -- let us say a meter high, each rock about the size of a clenched fist -- which we would like to count. Since we cannot multiply, and since we do not have a scale or any method to measure an average weight (which would require division), we are stuck counting each rock by picking it up off the pile and throwing it onto a discard pile, making sure we do not lose count so that we do not have to start over.

Now, let us suppose that we do know how to multiply (but not how to divide). We can count the rocks much more quickly by getting on top of the pile and kicking the rocks off into a completely flat pile, aiming for a roughly square shape along the ground. Once we have flattened the pile, all we have to do is line the rocks up in even rows, like a grid, in an approximately square shape. Once all the rocks are aligned, vertically and horizontally, we only need to count the number of rows and the number of columns, multiply, and then subtract the remainder (if any) due to the fact that the number of rocks might not be exactly square. Thus, we see that multiplication is a very human kind of thing: it is a practical shortcut to counting one-by-one, or even to performing repetitive sums (like adding the number 5 to itself, over and over).

If we know how to divide, we can calculate the number of rocks even more quickly by weighing random samples of rocks and figuring out an average weight per rock, then weighing all the rocks in a single go (assuming our scale can handle that much weight) and dividing by the average weight. Our result will not be exact, that is, it will have some range of error. But if we have that many rocks, we probably don't care to count them down to the individual rock.
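The two shortcuts can be sketched with made-up numbers (the pile size, grid shape and rock weights below are all invented for illustration):

```python
import random

random.seed(0)

# Pretend the pile really contains 1000 rocks, each weighing roughly 1 kg.
rock_weights = [random.uniform(0.8, 1.2) for _ in range(1000)]

# Shortcut 1 (multiplication): arrange the rocks into a 32x32 grid; the last
# row comes up 24 rocks short, so multiply and subtract the remainder.
rows, cols, missing = 32, 32, 24
count_by_multiplication = rows * cols - missing
assert count_by_multiplication == 1000  # exact

# Shortcut 2 (division): weigh a random sample to get an average weight per
# rock, weigh the whole pile in one go, and divide. Approximate, not exact.
sample = random.sample(rock_weights, 20)
avg_weight = sum(sample) / len(sample)
estimate = sum(rock_weights) / avg_weight
print(round(estimate))  # close to 1000, with some sampling error
```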

Understanding this "bridge" between the practical application of mathematics and the abstract theory of mathematics is essential. It is easy for those who are not mathematically inclined to forget just how much theoretical analysis goes into our everyday life -- everything from the gadgets we carry (cellphones) to the machines that carry us (cars, trains, planes, etc.) The engineering that makes these devices real is built on a theoretical foundation of mathematics. And this theoretical foundation is really all about finding shortcuts, like the rock-counting shortcuts I explained above. If we squint, we can say that the rock-counter and the rocket engineer are doing precisely the same thing, just to different tolerances (precision) and using deeper/shallower theoretical tools (tools that rely on longer or shorter chains of deduction).

It can be argued that mathematics is a much more academic field than other sciences. I don't have any numbers to support my hunch, but my hunch is that someone with an advanced degree in mathematics is much more likely to go into academia than someone with an advanced degree in physics or another science. There's nothing better or worse about this, but I think it does contribute to the "math echo chamber" or the "academic ivory tower" that many of us perceive in the body of mathematics. It can sometimes seem impossible to figure out what a given subject of mathematics is even about, let alone what is the significance of its theoretical results. Popular explanations are often so dumbed-down as to be useless. In short, it seems that the only way to learn certain subjects is to go to university and take a course in that subject... a cost that is prohibitively high merely to satisfy a generic curiosity.

I don't think that there is any easy solution to this problem for the foreseeable future, but I do think that people should get over the idea that mathematics is "not for me." Mathematics is nothing more or less than completely organized thinking. Mathematics emphasizes organization over application because the real world is messy, and that messiness can prevent us from finding solutions to solvable problems by imposing irrelevant conditions on the solution. For example, many of the most celebrated results in mathematics (e.g. Euclid's theorem on the infinitude of primes) are proved by use of infinite sets or other infinity-based arguments. Of course, there is no actual infinitude in the real world. If we restricted ourselves to reasoning about numbers only in ways that can, at every point, be applied to the real world, then we would never be able to use infinite sets (because there are no real infinite sets), and we could never prove basic facts about the natural numbers that are so useful that they pop up in countless other mathematical theorems.

Proponents of the praxeological method are liable to see the value in deductive reasoning, including mathematics. However, there is a certain "allergy" to the intentional non-realism of mathematical methods when applied to economics, especially aggregating methods like those employed by Keynes. These objections are well-founded, but recent developments in mathematics in the era of computation are changing the relevance of objections to the older style of abstraction. A determined person could categorize praxeology as a department of game theory in which the constraints of the game(s) to be played are bound at every point by real-world conditions. In turn, game theory and computation are merging to enable richly textured simulations of human behavior at scales never before possible. As with any chaotic system (e.g. the weather), simulation cannot provide iron-clad, deterministic predictions about the eventual evolution of the system... but as simulations improve, they tend to be more and more correct, more and more often. We cannot wave such simulations aside on the grounds that they are sometimes completely wrong and rarely, if ever, completely correct. The fact that they are often pretty correct is significant and has profound implications for economic science.

I'll close with a visual proof of Pythagoras's theorem which I find much more compelling than symbolic proofs:

Re: AI and technology thread

Posted: Wed Dec 05, 2018 4:45 am
by Clayton
Just watched this, highly recommend:

I'm somewhere on the spectrum between Goertzel and McKenna in terms of how I look at the world (symbology vs. empiricism), but Goertzel's views on AGI and the Singularity, in general, are very balanced, and I can't help finding myself agreeing with him almost completely. There's an AI tsunami coming and it's going to change everything, and it will almost certainly happen faster than the person on the street imagines such widespread change to be possible. Consider this article I just read today: AI is changing the way aircraft are built. The gains AI delivers in weight, fuel and energy are already visible. Aircraft designers are leveraging AI to design components that no human could ever design because the number of possible combinations is just too high. The AI is able to compute millions, billions or even trillions of possible combinations, measure strength versus weight for each one, rank them, and then grow new designs based on the best performers. This kind of approach, where designers make the machine do the heavy lifting, is only going to spread to other fields, and it is going to yield more and more gains as AI algorithms become more sophisticated and power-efficient.
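As I understand it, the generate/score/rank/grow loop described in that article is essentially an evolutionary search. Here is a toy sketch under invented assumptions: the strength/weight model, the mutation scheme and all the numbers are made up for illustration (real generative-design tools score candidates with finite-element analysis, not a one-line formula):

```python
import random

random.seed(1)

# A candidate "design" is just a list of member thicknesses. In this toy
# model, strength grows with thickness but with diminishing returns, while
# weight grows linearly, so thinner members score better.
def score(design):
    strength = sum(t ** 0.5 for t in design)
    weight = sum(design)
    return strength / weight

def mutate(design):
    # Grow a new design by jittering each thickness slightly.
    return [max(0.1, t + random.gauss(0, 0.1)) for t in design]

# Generate candidates, rank them by strength versus weight, keep the best,
# and grow new designs from the survivors.
population = [[random.uniform(0.5, 2.0) for _ in range(8)] for _ in range(50)]
for generation in range(100):
    population.sort(key=score, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = max(population, key=score)
print(round(score(best), 2))  # far better than a typical random design (~0.9)
```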

Re: AI and technology thread

Posted: Fri Dec 07, 2018 3:02 am
by Clayton
Fairly accessible, especially the introduction:

Artificial neural networks are only the beginning. Brace yourself for A-life....

Re: AI and technology thread

Posted: Fri Dec 07, 2018 7:23 pm
by Jon Irenicus

Re: AI and technology thread

Posted: Sat Dec 08, 2018 2:09 am
by Clayton
While DeepMind keeps knocking home runs out of the park, I am keeping a highly skeptical eye on their claims because no one can actually independently verify them. DeepMind can get away with this because, well, even if they published all their work (and they're not going to do that), nobody else has access to the Google-scale compute resources that would be required to actually cross-check the results DeepMind is getting. Given Google's blatant abandonment of its motto ("don't be evil"), I am all the more wary of DeepMind. They're probably a good 2-3 years ahead of the curve (by comparison to the rest of the ML community) just by virtue of the massive compute resources at their disposal... so the upside is that we know ML is capable of massive improvement as it scales up (in other words, we're not chasing a law of diminishing returns).

Re: AI and technology thread

Posted: Fri Dec 21, 2018 1:36 am
by Clayton
The U.S. National Academies Reports on the Prospects for Quantum Computing - Experts in the field give a hard-headed assessment of just how hard the technology is

Re: AI and technology thread

Posted: Sat Jan 05, 2019 6:33 pm
by Clayton
The 'Agent Alignment Problem'

Paper, here.

I also recommend the following video for a relatively accessible discussion of some of the problems surrounding this general concept:

My outlook is not as sunny as Zolna-Feher's, but I am encouraged to see this paper because it means that at least some people in the field are starting to take the control problem seriously. To date, most of the (well-justified) hand-wringing has been coming from people outside of the AI field.

I'm not sure that the people who explain the control problem always do the best job of explaining why the control problem is such a big deal. I recently re-watched the 2nd and 3rd installments of The Matrix and there is a scene where Neo and Councillor Hamann have a late-night chat down in one of the subterranean machine rooms. This scene captures the essence of why the control problem is both important and uniquely difficult:


HAMANN: Almost no one comes down here. Unless, of course, there's a problem. That's how it is with people. Nobody cares how it works as long as it works. I like it down here. I like to be reminded this city survives because of these machines. These machines are keeping us alive while other machines are coming to kill us. Interesting, isn't it? The power to give life... and the power to end it.

NEO: We have the same power.

HAMANN: Yeah, I suppose we do, but... sometimes I think about those people still plugged into the Matrix. And when I look at these machines... I can't help thinking that, in a way, we are plugged into them.

NEO: But we control these machines, they don't control us.

HAMANN: Of course not. How could they? The idea is pure nonsense... but it does make one wonder just... what is control?

NEO: If we wanted, we could shut these machines down.

HAMANN: Of course. That's it. You hit it. That's control, isn't it? If we wanted, we could smash them to bits. Although if we did, we'd have to consider what would happen to our lights... our heat, our air. So we need machines and they need us.

NEO: Is that your point, councillor?

HAMANN: No. No point. Old men like me don't bother with making points. There's no point.

NEO: Is that why there are no young men on the Council?

HAMANN: Good point.

NEO: Why don't you tell me what's on your mind, councillor.

HAMANN: There is so much in this world... that I do not understand. See that machine? It has something to do with recycling our water supply. I have absolutely no idea how it works. But I do understand the reason for it to work.

When we (the human species) began harnessing the power of Nature by means of mechanization, we actually began a process of implanting the machines into ourselves. We are already a cyborg species and we have been for a couple centuries now. The implantation of machinery into the human collective has made our relationship with machines like that of an aggressive tumor that is so close to a vital organ that it cannot be removed without immediately killing its host. We don't have to construct a fictional world, such as The Matrix, in order to see that this is not merely a metaphor. We are fully dependent upon the machines we have created and we cannot survive without them. A solar EMP wiping out all electronics would not result in total extinction of the human population, but it would be very close to that. I can't imagine how even 10% of the population would survive more than seven years after the fact.

Up to this point in history, machines have been mutually symbiotic with us, meaning, we create and maintain them, and they improve our lives. But when machines have agency (even limited agency, like Google Duplex), this fundamentally changes the equation. The machines then have the capacity to act unilaterally. A key component of agency is anonymity, in the sense that humans cannot necessarily discern whether they are interacting with a human or a machine. Today, this is only possible over remote channels like phone and video-conference. But in the not-too-distant future, humanoid robots will arrive and it will only get more and more difficult to discern the difference between fleshly human beings and robotic mimics.

This stage of technology is soon to be upon us (years, not decades) and it will give rise to what AI researchers call the "agent alignment problem", which is really just a fancy name for the control problem. The difference, perhaps, is in the details of how you choose to think about it. In the field of AI, "agent" and "agency" have a well-defined meaning: computer code that can make choices that alter states in the real world is called an agent. We already have lots of computerized machinery that alters states in the real world... think of paper-plant safety-control electronics, for example. These machines regulate the system by changing real-world states, and if things go wrong, they can have disastrous consequences. But the machines themselves are not agents, because they operate on a very restricted form of logic (typically running on programmable logic controllers, or PLCs) that is best thought of as "reflexive", like when the doctor taps your knee and your foot kicks of its own accord.
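The reflex-versus-agent distinction can be sketched in code. This is a toy illustration with invented names and numbers (the pressure model and the actions are hypothetical, not any real control system):

```python
# A reflexive, PLC-style controller: a hard-wired threshold rule. The same
# reading always produces the same response; no alternatives are considered.
def reflexive_controller(pressure):
    return "open_relief_valve" if pressure > 100 else "hold"

# An agent-style controller: consider each available action, predict the
# state it would lead to, and choose the action scored best.
def agent(pressure, actions, predict_score):
    return max(actions, key=lambda action: predict_score(pressure, action))

# Hypothetical outcome model: each action shifts pressure by a fixed amount,
# and states closer to a target pressure of 80 score higher.
def predict_score(pressure, action):
    shift = {"open_relief_valve": -30, "throttle_input": -10, "hold": 0}
    return -abs(pressure + shift[action] - 80)

actions = ["open_relief_valve", "throttle_input", "hold"]
print(reflexive_controller(95))           # hold -- below the hard threshold
print(agent(95, actions, predict_score))  # throttle_input -- evaluated as best
```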

For obvious reasons, it's difficult to define exactly what constitutes "choice" in machine systems. At some level of complexity, it simply becomes easier to think of a computer system by way of analogy to a biological system than by way of analogy to a machine, like a steam-engine or an old-fashioned mechanical cash-register. We are right now at the tipping-point between these two ways of thinking about machines. Going forward, it will become more and more sensible to think of our computer systems as agents, rather than as automatons. They are becoming agents and we are imbuing them with the capacity of choice as a matter of degree, not as some kind of qualitative "upgrade".

This change alters the status quo symbiosis between man and machine. The game-theoretic equilibrium is subject to drastic change because the machines will increasingly exhibit autonomy, which means we are no longer locked into a guaranteed, mutually beneficial symbiosis with them. No, we are transitioning to a game of imperfect information between two or more autonomous players. [Side note: Chess is a game of perfect information because all players can see all the information relevant to the game at all times. By contrast, poker is a game of imperfect information, since each player has private information (his or her cards) not accessible to the other players.] This is perhaps the most troubling aspect of the transition, because we do not know how much smarter than us machines will become, nor how fast they will get there, nor how broadly their intelligence will range. The best-case scenario is that autonomous machines develop a superhuman capacity to transform world-state according to very simple rules (e.g. arithmetic, games, etc.) but remain baffled by higher biology, especially human behavior. That is, we hope that the machines will remain unable to form a reliable model of human behavior. The worst-case scenario is that machines enter some kind of recursive self-improvement loop, vastly outstrip the human capacity for abstraction at every level (including math, physics, biology, the human psyche, everything without exception) and become some kind of terrifying, mechanical deity that operates with capricious impunity and simply stamps out of existence anything that gets in its path, like humans cutting through a landscape to lay down rails. In this case, the agent's interests are not aligned with our own.

I hope that it is clear why this is an important and uniquely difficult problem. Machines will soon be able to mimic the human form in every respect (voice, appearance, dynamics, etc.) and they will also be transitioning into a level of complexity that will make them de facto agents (no matter what philosophical quibbles you may have with a "programmed machine" having choice). We already have machines that are superhuman on many different measures and we can have no doubt that the machines of the future will be even more Olympian than our present-day machines. We can hope that the machines do not develop overnight some capacity to "hide among us" and use their super-intelligence to render us their slaves or, at least, to make it impossible for us to resist them, no matter their ends. The control problem, aka the agent alignment problem, is unsolved and the challenge we face is to devise a strategy that gives us at least some fighting chance against the worst-case scenario (by somehow bringing the machines to a detente with us), and strong confidence in the best-case scenario that our relationship with machines will continue to be a mutually-beneficial symbiosis.

Re: AI and technology thread

Posted: Tue Jan 15, 2019 2:22 am
by Clayton