AI and technology thread

Economics, Politics, Philosophy, History and Culture
Forum rules
Always add something of value to the discussion and have fun. Mind your language, no porn, no libel, no trolling and no personal attacks.

Please note, views expressed on the forum do not necessarily represent the views of Mises UK, the Mises UK Council, Mises UK Faculty or Mises UK members. Membership of this forum is open to anyone worldwide who is interested in civil discussion.
Clayton
Posts: 122
Joined: Sat Mar 31, 2018 4:24 pm

AI and technology thread

Post by Clayton » Fri Sep 21, 2018 2:00 am

No dark corner: China's digital totalitarianism

Moore's law has been dead for years, and this signals that we are on the verge of a sea-change in computing technology. The oceans of money once invested in mining Moore's law on silicon-based processes are branching out into other technologies. Some of these alternatives have the potential to achieve many orders of magnitude higher computational capacity than silicon-based computers.

Photonic quantum neural-nets - Several breakthrough technologies are beginning to merge. I am convinced that many of the predictions around quantum computing are significantly over-optimistic and that the inflated claims are being driven by the end of Moore's law (every alternative to silicon technology is locked in a bidding war, trying to attract that capital to itself). But it will be some cross-product of these emerging technologies (deep learning, quantum computing, classical alternatives to silicon technology) that makes the technological singularity possible.

Clayton

Re: AI and technology thread

Post by Clayton » Sat Sep 22, 2018 2:00 am

Good overview of a lot of the issues facing technology and AI development in the current status quo:



The "AI is a scam" mantra is nonsense. AI is, indeed, coming of age. This is obvious -- you can talk to your phone, the ads you receive are targeted based on AI algorithms, much of the video and audio processing that was done by hand just 15 years ago is automated by AI-based algorithms, and so on. The idea of AI as the god-in-the-machine is a strawman, anyway. Who says we even want such a thing? AI is not about building a super-intelligent person, it's about harnessing the essence of automated information-processing in a way that maximally facilitates human ends.

That said, AI is not a panacea. AI is going to help us solve every problem that we can define well. In fact, it is not difficult to imagine a near future in which no well-defined problem is ever manually calculated except as an educational exercise or for recreation. But many of the most important problems facing humanity are problems that we have difficulty even defining exactly. What exactly is racism? What exactly is sexism? What exactly is bigotry? What exactly is love? We could continue like this for days. AI cannot solve these problems for us. But it may just help us build a material foundation upon which we can begin to make real change by finding meaningful and universally acceptable solutions to them.

---

Here's a light-hearted application of AI that nonetheless has potentially serious uses, and that illustrates the general sweep of what AI will make us capable of. Today, we're using these AIs to build silly little toy applications that convert your sketches into vector graphics -- soon, we will have serious AI applications capable of automating the process of taking a car from a concept drawing to a 3D CAD proof-of-concept model ready for 3D printing, or taking storyboards for an animated movie and converting them into first-draft 3D animated scenes. This is why I keep underlining that we are only just beginning to scratch the surface of the coming AI boom.


Clayton

Re: AI and technology thread

Post by Clayton » Wed Sep 26, 2018 3:54 am

OpenAI's DOTA2 project is learning at a rate of 180 years per day:



And that's just one AI project at one AI firm. I'm not sure where the "AI winter is coming" narrative is coming from, but anyone who buys into that nonsense has their head buried in the sand. AI is accelerating so fast, it's positively staggering. I comb through machine-learning resources online on pretty much a daily basis just to keep abreast of the latest developments, and I can hardly keep up. I'm not talking about "2% improvement on this academic benchmark" developments (those count, too, but that's not what I track) -- I'm talking about the big architectural breakthroughs, such as the presentation on advances in reinforcement learning by DeepMind above.

We are reaching a point where basically every major AI-related technology is just a matter of throwing more compute resources at the problem. That's huge, because it reduces the problem to "copy and paste"... and "copy and paste" is the easiest thing in the world in the digital age. We now know that all we have to do is keep throwing more and more GPUs at these problems and they will yield. A high-end GPU is in the $3-$5k range -- the cost of a new paint job on a fleet vehicle. Ten GPUs is the cost of a new fleet pickup. AWS allows the costs of computing to be completely amortized to an hourly rate, so corporations can just calculate the cost/benefit tradeoff of investing in new AI ventures. Data acquisition is not free, but it's becoming cheaper every day, certainly at a much higher rate than the now-dead Moore's Law.

AI investment is bubbly, of course, but it's still true that we are on the early, early upturn for AI. The closest comparison I can think of is the discovery of oil and the proliferation of oil-well technology in the late 19th and early 20th centuries. That might sound like an exaggeration, but I am convinced it will prove to be an understatement, if anything.
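To make the buy-vs-rent arithmetic concrete, here is a deliberately crude back-of-the-envelope sketch. All figures ($4,000 for a GPU, $3/hour for a cloud instance) are illustrative assumptions, not quotes, and the model ignores power, maintenance, and resale value:

```python
def breakeven_hours(purchase_price, hourly_rate):
    """Hours of utilization at which buying hardware outright
    becomes cheaper than renting equivalent compute by the hour."""
    return purchase_price / hourly_rate

# Hypothetical numbers: a $4,000 high-end GPU vs a $3/hour cloud GPU instance.
hours = breakeven_hours(4000, 3.0)
print(round(hours), "hours, or about", round(hours / 24), "days of 24/7 use")
```

With these made-up numbers, roughly two months of continuous training is enough to justify buying the hardware, which is why the cost/benefit calculation is so tractable for corporations.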


Clayton

Re: AI and technology thread

Post by Clayton » Sun Oct 07, 2018 8:15 pm

Thank you, YouTube-recommendations:



I want to tack on to Diamandis's remarks by noting that intelligence is inherently scale-invariant. We see this in animal species, where flock/herd/school behavior actually makes the individual organisms safer by making predation more difficult, using "peer-to-peer, distributed" methods. The wisdom of crowds comes into play as well, but above all of this is the so-called network effect: the more connections my brain has to other brains, the more value my good ideas can inject into the entire system. Of course, we have to make some assumptions about the overall prevalence of good ideas versus bad ideas, the rate at which good ideas are repeated, and the prevalence of adversarial agents within the system. But assuming the human population can jump these relatively modest hurdles, our collective intelligence as a globally connected, intelligent species will be vastly higher than the sum of our individual intelligences.
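The network-effect point above can be put in numbers with a Metcalfe-style sketch (the population sizes are just illustrative): individual capacity grows linearly with the number of brains, but the distinct channels a good idea can travel grow quadratically.

```python
def potential_links(n):
    """Distinct pairwise connections among n nodes: n choose 2."""
    return n * (n - 1) // 2

# Linear growth in nodes, quadratic growth in connections.
for n in [10, 100, 1000]:
    print(n, "brains ->", potential_links(n), "possible links")
```

So going from 100 to 1,000 connected minds multiplies the individual count by 10 but the possible idea-channels by roughly 100, which is the sense in which the whole can exceed the sum of the parts.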

Soon, we will be working in a world where every researcher in every lab is always working at the bleeding-edge of his/her field -- we're talking 100% online, up-to-the-minute experimental methods and results that are universally available to every brain on the planet with an interest in the field. Coursera and similar online classrooms are the first step towards "Matrix-style learning" where you just select a subject to learn and "blast" it into your brain. (No, nobody's figured out how that will work, yet, but an obvious first step is compressing the amount of time-investment required for an individual to learn any given technical subject... online, at-your-own-pace classrooms do this).

I read some excerpts from the excellent book Algorithms to Live By, and I think its title captures the essence of what I'm talking about. Let's say you woke up this morning and discovered that a mysterious Tesla-like genius had bequeathed you a massively parallel computer with 7 billion CPUs. What would you do with it? How would you get maximum bang out of this system, given its theoretical compute capacity?

Well, the first thing you want to do is get all the CPUs talking to each other, but without raising the noise-floor in the system too much (that last bit is the tricky part). Next, you want to get each CPU working as efficiently as possible on whatever it is doing. This is where Algorithms to Live By comes in: the more efficiently each CPU in your 7-billion-node super-computer makes decisions and completes the tasks it chooses to take up (without introducing noise/degradation into any other CPU), the more efficient the overall system will be.

Finally, you would want to think about how to achieve Zen. I can't find the quote right now, but when I implemented the Boyer-Moore string search algorithm a while ago, I came across a line about it to the effect of "the most efficient algorithm is the algorithm that does as little as possible," or "the art of algorithm optimization is the art of doing nothing." You get the idea. The Boyer-Moore algorithm actually goes faster the longer the pattern you are searching for, which is the opposite of what you expect from a naive string search. Often, when we assess the inherent difficulty of a particular goal, we arrive at that assessment on the basis of a lot of unjustified assumptions about the world. Simply changing your assumptions can dramatically reduce the cost of achieving a goal that everyone else believes to be an insurmountable difficulty.
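For the curious, here is a minimal sketch of Boyer-Moore using only the bad-character rule (the full algorithm also uses a good-suffix rule, omitted here for brevity). The "do as little as possible" property shows up in the shift: on a mismatch, the pattern can jump forward by up to its own length, so longer patterns tend to mean fewer character comparisons:

```python
def boyer_moore_search(text, pattern):
    """Find the first occurrence of pattern in text, or return -1.

    Uses the bad-character rule: compare right-to-left, and on a
    mismatch, slide the pattern so its last occurrence of the
    offending text character lines up -- or slide past it entirely
    if that character never appears in the pattern.
    """
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    # Bad-character table: last index of each character in the pattern.
    last = {ch: i for i, ch in enumerate(pattern)}
    i = m - 1  # position in text
    j = m - 1  # position in pattern
    while i < n:
        if text[i] == pattern[j]:
            if j == 0:
                return i  # full match found
            i -= 1
            j -= 1
        else:
            l = last.get(text[i], -1)
            i += m - min(j, 1 + l)  # skip ahead, up to m characters at once
            j = m - 1
    return -1

print(boyer_moore_search("hello world", "world"))  # -> 6
```

Notice that the skip grows with `m`, the pattern length; that is the sense in which the algorithm gets faster the longer the thing you are looking for.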

Jon Irenicus
Posts: 163
Joined: Sun Mar 25, 2018 9:36 pm

Re: AI and technology thread

Post by Jon Irenicus » Mon Oct 08, 2018 5:53 pm

Former overlord of the original Mises forum.

Clayton

Re: AI and technology thread

Post by Clayton » Thu Oct 11, 2018 1:58 am

@Jon: Link doesn't work.

Fortune: Can the whales of Bitcoin tank the market?

Don't get me wrong, crypto is still a super risky investment. But it's not as scary as the anti-crypto people would have you believe.

Alexa, Should We Trust You?


Clayton

Re: AI and technology thread

Post by Clayton » Fri Nov 16, 2018 1:49 am

The Case Against Quantum Computing, by Mikhail Dyakonov - The proposed strategy relies on manipulating with high precision an unimaginably huge number of variables

The analogy that kept popping up in my mind as I was reading this article was "cold fusion." Of course, the difference is that QC has a fully functioning theoretical and empirical foundation... but there are still many parallels.

I'm not as pessimistic about QC as Dyakonov seems to be, but someone really needs to throw a wet blanket on the out-of-control QC hype. Yes, we have research prototypes that really do QC. There's no need to "prove" it via "quantum supremacy"; the fact is, QC is real and QC works. That said, QC is still moonshot technology, yet much of the breathless hype surrounding it gives the impression that productization, in one form or another, is imminent. This is simply not true. It's not imminent, it's not close to imminent and, in fact, no one on the planet has laid out a complete, credible roadmap from the status quo to a productizable QC (D-Wave being the black-sheep exception that proves the rule). It is plain to see that QC needs to incubate a while longer before it can enter maturity.
