AI and technology thread

Economics, Politics, Philosophy, History and Culture
Forum rules
Always add something of value to the discussion and have fun. Mind your language, no porn, no libel, no trolling and no personal attacks.

Please note, views expressed on the forum do not necessarily represent the views of Mises UK, the Mises UK Council, Mises UK Faculty or Mises UK members. Membership of this forum is open to anyone worldwide who is interested in civil discussion.
Clayton
Posts: 144
Joined: Sat Mar 31, 2018 4:24 pm

AI and technology thread

Post by Clayton » Fri Sep 21, 2018 2:00 am

No dark corner: China's digital totalitarianism

Moore's law has been dead for years - This signals that we are on the verge of a sea change in computing technology. The oceans of money invested in mining Moore's law on silicon-based processes are branching out into other technologies. Some of these alternatives have the potential to achieve many orders of magnitude more computational capacity than silicon-based computers.

Photonic quantum neural-nets - Several breakthrough technologies are beginning to converge. I am convinced that many of the predictions around quantum computing are significantly over-optimistic and that the inflated claims are being driven by the end of Moore's law (every alternative to silicon technology is locked in a bidding war, trying to attract that capital to itself). But it will be some cross-product of these emerging technologies (Deep Learning, Quantum Computing, classical alternatives to silicon technology) that is going to make the technological singularity possible.

Clayton
Posts: 144
Joined: Sat Mar 31, 2018 4:24 pm

Re: AI and technology thread

Post by Clayton » Sat Sep 22, 2018 2:00 am

Good overview of a lot of the issues facing technology and AI development in the current status quo:



The "AI is a scam" mantra is nonsense. AI is, indeed, coming of age. This is obvious -- you can talk to your phone, the ads you receive are targeted based on AI algorithms, much of the video and audio processing that was done by hand just 15 years ago is automated by AI-based algorithms, and so on. The idea of AI as the god-in-the-machine is a strawman, anyway. Who says we even want such a thing? AI is not about building a super-intelligent person, it's about harnessing the essence of automated information-processing in a way that maximally facilitates human ends.

That said, AI is not a panacea. AI is going to help us solve every problem that we can define precisely. In fact, it is not difficult to imagine a near future where no well-defined problem is ever manually calculated except as an educational exercise or for recreation. But many of the most important problems facing humanity are problems that we have difficulty even defining exactly. What exactly is racism? What exactly is sexism? What exactly is bigotry? What exactly is love? We could continue like this for days. AI cannot solve these problems for us. But it may just help us build a material foundation upon which we can begin to make real change by finding meaningful and universally acceptable solutions to them.

---

Here's a light-hearted use of AI that potentially has serious applications and also illustrates the general sweep of what AI will make us capable of. Today, we're using these AIs to build silly little toy applications that convert your sketches into vector-graphics pictures -- soon, we will have serious AI applications capable of automating the process of taking a car from a concept drawing to a 3D CAD proof-of-concept model, ready for 3D printing, or taking storyboards for an animated movie and converting them to first-draft 3D animated scenes. This illustrates why I keep underlining that we are only just beginning to scratch the surface of the coming AI boom.


Clayton
Posts: 144
Joined: Sat Mar 31, 2018 4:24 pm

Re: AI and technology thread

Post by Clayton » Wed Sep 26, 2018 3:54 am

OpenAI's DOTA2 project is learning at a rate of 180 years per day:



And that's just one AI project at one AI firm. I'm not sure where the "AI winter is coming" narrative is coming from, but anyone who buys into that nonsense has their head buried in the sand. AI is accelerating so fast, it's positively staggering. I comb through machine-learning resources online on pretty much a daily basis, just to keep abreast of the field, and I can hardly keep up with the latest developments. I'm not talking about "2% improvement on this academic benchmark" developments (those count, too, but that's not what I track) -- I'm talking about the big architectural breakthroughs, such as the presentation on advances in reinforcement learning by DeepMind above.

We are reaching a point where basically every major AI-related technology is just a matter of throwing more compute resources at the problem. That's huge, because it reduces the problem to "copy and paste"... and "copy and paste" is the easiest thing in the world in the digital age. We now know that all we have to do is keep throwing more and more GPUs at these problems and they will yield. A high-end GPU is in the $3-$5k range. That's the cost of a new paint job on a fleet vehicle. Ten GPUs cost about as much as a new fleet pickup. AWS allows the cost of computing to be completely amortized to an hourly rate, so corporations can just calculate the cost/benefit tradeoff of investing in new AI ventures. Data acquisition is not free, but it's becoming cheaper every day, certainly at a faster rate than the now-dead Moore's Law could deliver.

AI investment is bubbly, of course, but it's still true that we are on the early, early upturn for AI. The closest comparison I can think of is the discovery of oil and the proliferation of oil-well technology in the late-19th and early-20th centuries. That might sound like an exaggeration, but I am convinced that it will prove to be an understatement, if anything.
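
To make the cost/benefit point concrete, here is a back-of-the-envelope sketch. Every number in it (GPU price, useful life, hourly cloud rate) is an illustrative assumption of mine, not a quote from any vendor:

    # Back-of-the-envelope comparison of buying GPUs vs. renting them by the hour.
    # Every number below is an illustrative assumption, not a vendor quote.

    GPU_PURCHASE_PRICE = 4000.0          # assumed price of one high-end GPU, USD
    GPU_LIFETIME_HOURS = 3 * 365 * 24    # assume roughly three years of useful life
    CLOUD_RATE_PER_GPU_HOUR = 3.0        # assumed hourly rental rate per GPU, USD

    def owned_cost(num_gpus, job_hours):
        """Amortized cost of running a job on GPUs you bought outright."""
        hourly_rate = GPU_PURCHASE_PRICE / GPU_LIFETIME_HOURS
        return num_gpus * hourly_rate * job_hours

    def rented_cost(num_gpus, job_hours):
        """Cost of renting the same GPU-hours from a cloud provider."""
        return num_gpus * CLOUD_RATE_PER_GPU_HOUR * job_hours

    gpus, hours = 10, 2 * 7 * 24         # a two-week experiment on ten GPUs
    print(f"owned (amortized): ${owned_cost(gpus, hours):,.0f}")
    print(f"rented hourly:     ${rented_cost(gpus, hours):,.0f}")

Whichever way that particular comparison comes out under your own numbers, the point stands: the decision reduces to ordinary accounting, not research risk.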


Clayton
Posts: 144
Joined: Sat Mar 31, 2018 4:24 pm

Re: AI and technology thread

Post by Clayton » Sun Oct 07, 2018 8:15 pm

Thank you, YouTube-recommendations:



I want to tack on to Diamandis's remarks by noting that intelligence is inherently scale-invariant. We see this in animal species, where flock/herd/school behavior actually makes the individual organisms safer by making predation more difficult, using "peer-to-peer, distributed" methods. The idea of the wisdom of crowds comes into play as well, but above all of this is the so-called network effect. The more connections my brain has to other brains, the more value my good ideas can inject into the entire system. Of course, we have to make some assumptions about the overall prevalence of good ideas versus bad ideas, the rate at which good ideas are repeated, and the prevalence of adversarial agents within the system. But assuming that the human population can jump these relatively modest hurdles, our collective intelligence as a globally-connected, intelligent species will be vastly higher than the sum of our individual intelligences.
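
As a toy illustration of those assumptions (adoption rate of a good idea, number of peer connections, share of adversarial agents), here is a rough simulation sketch; every parameter is made up purely for the sake of the example, and the only point is the qualitative effect of connectivity:

    # Toy simulation: how the number of peer-to-peer connections affects the spread
    # of a single good idea. Every parameter here is an assumption chosen purely for
    # illustration; the point is only the qualitative effect of connectivity.
    import random

    def simulate(num_agents=1000, connections_per_agent=8, adversarial_fraction=0.05,
                 adoption_prob=0.3, rounds=20, seed=0):
        rng = random.Random(seed)
        adversarial = set(rng.sample(range(num_agents),
                                     int(adversarial_fraction * num_agents)))
        # A random peer-to-peer network: each agent talks to a handful of peers.
        peers = {a: rng.sample(range(num_agents), connections_per_agent)
                 for a in range(num_agents)}
        has_idea = {0}                      # one agent starts with the good idea
        for _ in range(rounds):
            convinced = set()
            for a in has_idea:
                for b in peers[a]:
                    # Adversarial agents never adopt or repeat the idea.
                    if b not in adversarial and rng.random() < adoption_prob:
                        convinced.add(b)
            has_idea |= convinced
        return len(has_idea) / num_agents

    for k in (2, 4, 8, 16):
        share = simulate(connections_per_agent=k)
        print(f"{k:2d} connections per agent -> {share:.0%} adoption after 20 rounds")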

Soon, we will be living in a world where every researcher in every lab is always working at the bleeding edge of his/her field -- we're talking 100% online, up-to-the-minute experimental methods and results that are universally available to every brain on the planet with an interest in the field. Coursera and similar online classrooms are the first step towards "Matrix-style learning," where you just select a subject to learn and "blast" it into your brain. (No, nobody has figured out how that will work yet, but an obvious first step is compressing the amount of time-investment required for an individual to learn any given technical subject... online, at-your-own-pace classrooms do this.)

I read some excerpts from the excellent book Algorithms to Live By, and I think this title captures the essence of what I'm talking about. Let's say you woke up this morning and discovered that a mysterious Tesla-like genius had bequeathed you a massively parallel computer with 7 billion CPUs. What would you do with it? How would you get maximum bang out of this system, given its theoretical compute capacity? Well, the first thing you want to do is get all the CPUs talking to each other, but without raising the noise floor in the system too much (that last bit is the tricky part).

Next, you want to get each CPU working on whatever it is doing as efficiently as possible. This is where Algorithms to Live By comes in. The more efficiently each CPU in your 7-billion-node supercomputer makes decisions and completes the tasks it chooses to take up (without introducing noise/degradation into any other CPU), the more efficient the overall system will be (a Pareto improvement).

Finally, you would want to think about how to achieve Zen. I can't find the quote right now, but I implemented the Boyer-Moore string search algorithm a while ago and came across a remark about it to the effect of, "The most efficient algorithm is the algorithm that does as little as possible," or, "The art of algorithm optimization is the art of doing nothing." You get the idea. This is because the Boyer-Moore search actually gets faster the longer the pattern you are searching for (it can skip more of the text at each mismatch), which is the opposite of what you would expect from a naive string search. Often, when we assess the inherent difficulty of a particular goal, we arrive at this assessment on the basis of a lot of unjustified assumptions about the world. Simply changing your assumptions can dramatically reduce the cost of achieving a goal that everyone else believes to be an insurmountable difficulty.
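
For anyone curious why a longer pattern makes the search faster, here is a minimal sketch of the simplified Boyer-Moore-Horspool variant (my own toy implementation, not the full Boyer-Moore algorithm): the bad-character table lets the search skip up to the full pattern length at every mismatch, so a longer pattern means bigger skips.

    # Boyer-Moore-Horspool: a simplified cousin of full Boyer-Moore that still shows
    # why longer patterns allow bigger skips through the text.
    def horspool_search(text, pattern):
        m, n = len(pattern), len(text)
        if m == 0:
            return 0
        if m > n:
            return -1
        # Bad-character table: how far we may shift when the text character aligned
        # with the end of the pattern is ch. Characters absent from the pattern
        # allow a full shift of len(pattern).
        shift = {ch: m - i - 1 for i, ch in enumerate(pattern[:-1])}
        i = 0
        while i <= n - m:
            if text[i:i + m] == pattern:
                return i
            i += shift.get(text[i + m - 1], m)
        return -1

    print(horspool_search("the complexity beast eats everything", "beast"))  # -> 15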

Jon Irenicus
Posts: 165
Joined: Sun Mar 25, 2018 9:36 pm

Re: AI and technology thread

Post by Jon Irenicus » Mon Oct 08, 2018 5:53 pm

Former overlord of the original Mises forum.

Clayton
Posts: 144
Joined: Sat Mar 31, 2018 4:24 pm

Re: AI and technology thread

Post by Clayton » Thu Oct 11, 2018 1:58 am

@Jon: Link doesn't work.

Fortune: Can the whales of Bitcoin tank the market?

Don't get me wrong, crypto is still a super risky investment. But it's not as scary as the anti-crypto people would have you believe.

Alexa, Should We Trust You?


Clayton
Posts: 144
Joined: Sat Mar 31, 2018 4:24 pm

Re: AI and technology thread

Post by Clayton » Fri Nov 16, 2018 1:49 am

The Case Against Quantum Computing, by Mikhail Dyakonov - The proposed strategy relies on manipulating with high precision an unimaginably huge number of variables

The analogy that kept popping up in my mind as I was reading this article was "cold fusion." Of course, the difference is that QC has a fully functioning theoretical and empirical foundation... but there are still many parallels.

I'm not as pessimistic about QC as Dyakonov seems to be, but someone really needs to throw a wet blanket on the out-of-control QC hype. Yes, we have research prototypes that really do QC. There's no need to "prove" it via "quantum supremacy"... the fact is, QC is real and QC works. That said, QC is still moonshot technology. Yet much of the breathless hype surrounding it gives the impression that productization, in one form or another, is imminent. This is simply not true. It's not imminent, it's not close to imminent and, in fact, no one on the planet has laid out a complete, credible roadmap from the status quo to a productizable QC (D-Wave being the black-sheep exception that proves the rule). It is plain to see that QC needs to incubate a while longer before it can enter maturity.

Clayton
Posts: 144
Joined: Sat Mar 31, 2018 4:24 pm

Re: AI and technology thread

Post by Clayton » Sat Nov 17, 2018 7:02 am

Clayton wrote:
Fri Nov 16, 2018 1:49 am
The Case Against Quantum Computing, by Mikhail Dyakonov - The proposed strategy relies on manipulating with high precision an unimaginably huge number of variables

So, I got embroiled in an online debate over this article. It's interesting to see the clash developing between the viewpoints of discrete-computation practitioners and quantum-computing theorists. When digital electronic computers were first coming into widespread use, many leading lights in the field firmly believed that humanity was on the verge of solving practically every solvable problem. After all, the digital computer provably contains within itself the capacity to simulate any formal system that can be rigorously described (this is the famous Church-Turing thesis which, in turn, is motivated by the universality of Turing machines and the lambda calculus). Almost 80 years later, the fields of software architecture and software engineering are much more modest in their claims about the power of digital computers to revolutionize the world. It is true that computers today calculate, in the blink of an eye, problems that a century ago nobody could have imagined to be practically solvable given any amount of time. But it has turned out that complexity is a savage beast that will eat anything in its path. And even quantum computers cannot escape the complexity beast.
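
To make the universality claim concrete, here is a minimal sketch of a Turing machine simulator; the machine encoded at the bottom (a unary incrementer) and its rule format are my own toy example, nothing from the article:

    # A minimal Turing machine simulator: a handful of lines of ordinary code that
    # can host any machine you care to encode as a rule table. The example machine
    # below (a unary incrementer) is a made-up illustration.
    from collections import defaultdict

    def run(rules, tape, state="start", blank="_", max_steps=10_000):
        """rules maps (state, symbol) -> (new_state, symbol_to_write, head_move)."""
        cells = defaultdict(lambda: blank, enumerate(tape))
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            state, cells[head], move = rules[(state, cells[head])]
            head += move
        return "".join(cells[i] for i in sorted(cells)).strip(blank)

    # Unary incrementer: move right past the 1s, write one more 1, then halt.
    rules = {
        ("start", "1"): ("start", "1", +1),
        ("start", "_"): ("halt", "1", +1),
    }
    print(run(rules, "111"))  # -> "1111"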

Here is a famous remark by Wilkes on the discovery of the importance of debugging in digital computers. Wilkes describes the moment he realized that "a large part of my life from then on was going to be spent finding mistakes in my own programs." This is the reality of complexity, a reality that computer scientists and engineers have nearly a century of experience with. In disciplines like economics, biology and computer science, complexity is taken as a matter of course, an inescapable constraint on any real system, whether discrete, continuous or even quantum! You cannot wish complexity away, you cannot outsmart it, you cannot devise some short, clever algorithm that makes complexity irrelevant, and you cannot pontificate upon some abstraction so beautiful that it will, by some black magic, untangle all the apparent complexity of reality and reveal the hidden simplicity that was always there. No, complexity is the residue that remains after you have simplified, clarified, optimized, reduced and abstracted the problem you are trying to solve. It is the thing that makes any kind of design (from cars to bridges to fisheries to stock markets to computers) irreducibly difficult.

A lot of the claims I see in QC papers and articles remind me of the boundless ambitions I had for my earliest computer programs, before I really understood just how incredibly difficult it is to design algorithms and build programs. I recently sketched a diagram on a single sheet of paper describing a simple algorithm that sweeps a matrix in a horizontal-then-vertical fashion. I could describe the algorithm diagrammed on that sheet of paper to anyone with an understanding of high-school algebra in about 10 minutes flat. Five hundred lines of dense code and three days later, I finally produced a functioning, performant prototype of the algorithm described on that sheet of paper. What made the algorithm difficult is that it was an optimization of a much simpler algorithm that I had written in less than 100 lines. Do you see what happened? I started with a problem. I solved it in less than 100 lines of code. Then I realized that my initial solution was far too slow. So I went and purposely made my life harder by introducing new complexity -- I decided to rewrite the original algorithm in a new, more optimal fashion, using a more complex design. The new design could still be described on a single sheet of paper, but the effective complexity (once you factored in all the book-keeping details) was many times greater!
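
To give a sense of what I mean by "simple", here is a generic sketch of the kind of naive horizontal-then-vertical sweep I'm describing (just the shape of the unoptimized idea, not the actual code from that project):

    # A generic sketch of a naive horizontal-then-vertical sweep over a matrix:
    # visit every row left-to-right, then every column top-to-bottom, applying a
    # caller-supplied operation to each cell. Just the shape of the idea, not the
    # optimized version (and not the actual project code).
    def sweep(matrix, visit):
        rows = len(matrix)
        cols = len(matrix[0]) if rows else 0
        for r in range(rows):                # horizontal pass, row by row
            for c in range(cols):
                visit(r, c, matrix[r][c])
        for c in range(cols):                # vertical pass, column by column
            for r in range(rows):
                visit(r, c, matrix[r][c])

    sweep([[1, 2], [3, 4]], lambda r, c, value: print(r, c, value))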

And my tiny microcosm of experience is just a reflection of the same pattern that afflicts every real design. Imagine being there when someone finally figured out how to build an internal combustion engine. Hooray! It works! Ah, but what misery lay on the road ahead. Engines won't start when they're cold. Moisture condenses inside gas tanks and carburetor bodies. Oil loses viscosity, fails to lubricate moving parts, and engines seize up. By the time all those problems were solved, somebody had invented the fuel injector and the race was on to build ever hotter, faster, smaller, lighter, more power-dense engines, a race that continues to this day, aided by what? That's right, highly complex software. So the gas engine made our lives simpler in some ways by making them more complex in other ways.

The pie-in-the-sky QC optimists (note: I'm a QC optimist, I just think there's a huge difference between optimism and hype) seem to think that quantum computers are somehow magically exempt from the constraints of complexity that afflict every other real system... yet they cannot give any reason why we should believe this. Even quantum weirdness is not weird enough to escape the complexity beast... it gobbles up quantum systems and classical systems with equal enthusiasm.
