AI and the rise of China: this is the New New World

Post by Clayton » Thu May 10, 2018 1:18 am

Google Duplex + Virtualization = First wave of the job-apocalypse



Virtualization allows a computer session to be run inside a virtual container. The container is provided by software that simulates the hardware on which an OS would normally run - the hypervisor. A virtual container allows the raw input-output (I/O) stream to be recorded and even replayed, which means that the graphical user interface (GUI) and the user's interaction with that interface can be recorded. OpenAI Universe uses a similar approach: it runs a VNC server (similar to Windows Remote Desktop), captures the output of games, and allows an AI agent to provide input to those games.
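
To make the OpenAI Universe point concrete, here is a rough sketch of how its (now archived) API exposed a VNC session as an environment that an agent can observe and act on. The environment name and the key-press action follow its old examples and may not run unmodified today.

```python
# Rough sketch of the old OpenAI Universe API: a VNC desktop session exposed
# as an environment whose observations are screen pixels and whose actions
# are raw VNC input events. Universe is archived, so treat this as illustrative.
import gym
import universe  # importing this registers the VNC-backed environments

env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1)        # attach to one remote (a Dockerized VNC desktop)
observation_n = env.reset()     # observations arrive over the VNC stream

while True:
    # Each action is a list of VNC input events; here the agent just holds ArrowUp.
    action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
```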

What use is it to record a user's interaction with a GUI? Well, let's consider a typical data-entry job. A website is consulted (the data source), an Excel spreadsheet is opened, and data from the website is entered into the spreadsheet. The spreadsheet is then synced with a back-end database. This work-flow is amazingly common -- tens of millions of Americans perform similar tasks daily. But this work-flow no longer requires a human "in the loop". Instead, you just train an AI package to mimic this work-flow. Such training can be performed remotely (just configure the virtual machine to forward the relevant I/O to the site where the AI training is being done), requiring absolutely no interaction with the humans performing the work.
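
As an illustration of what "recording the relevant I/O" of such a session could look like, here is a minimal sketch that logs keyboard and mouse events with timestamps using the third-party pynput library. The file name and the JSON-lines event format are placeholders, not any particular product's trace format.

```python
# Minimal sketch: log a user's keyboard and mouse events with timestamps --
# the kind of raw I/O trace an AI package could later be trained on.
import json
import time
from pynput import keyboard, mouse

LOG_PATH = "session_events.jsonl"  # illustrative file name

def log_event(kind, detail):
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps({"t": time.time(), "kind": kind, "detail": detail}) + "\n")

def on_press(key):
    log_event("key_press", str(key))

def on_click(x, y, button, pressed):
    log_event("mouse_click", {"x": x, "y": y, "button": str(button), "pressed": pressed})

keyboard.Listener(on_press=on_press).start()
mouse_listener = mouse.Listener(on_click=on_click)
mouse_listener.start()
mouse_listener.join()  # keep recording until the listeners are stopped
```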

Once the work-flow itself has been trained into the AI, the work-flow can be activated through some kind of management interface - just log in to a Web UI and click the "Fetch Web data and put it into the database" button. Of course, once you have this button, clicking it can be just another task that is part of any automated AI work-flow. There are some limits to what can be automated with this approach - specifically, you can't automate things that require manipulation of a physical object in concert with I/O. However, you can arrange the work-flow to minimize the need for this kind of thing. For example, instead of manually reading pages out of a bound document, you can slice the binding off of it and load it into a scanner and allow the AI to handle the rest (there's nothing stopping the AI from operating the scanner through its web/network interface in concert with other activities).
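
Here is a minimal sketch of that management-interface button as a tiny Flask app, where clicking the button amounts to an HTTP POST; run_trained_workflow() is a hypothetical stand-in for whatever replays the learned work-flow.

```python
# Sketch of the "push-button" management interface: an HTTP POST kicks off a
# trained work-flow. run_trained_workflow() is hypothetical -- it stands in
# for whatever replays the learned data-entry task.
from flask import Flask, jsonify

app = Flask(__name__)

def run_trained_workflow():
    # Placeholder for the trained work-flow ("fetch Web data and put it into the database").
    return {"rows_processed": 0}

@app.route("/workflows/fetch-web-data", methods=["POST"])
def fetch_web_data():
    result = run_trained_workflow()
    return jsonify(status="completed", **result)

if __name__ == "__main__":
    app.run(port=8080)
```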

The innovation that Google Duplex and related technologies will enable is a "natural interface" to AI systems. When I want to adjust my stock portfolio, I need to call my stock broker and explain (in words) what I want to happen. But if the AI can engage in conversation, it can seamlessly connect the "human space" with its virtual AI space. So, the AI can completely replace the job being performed by the front-line phone operator at a financial management firm.
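
A toy sketch of that hand-off from "human space" into the AI's structured world: turn a transcribed request into an action a back-end system could execute. A real Duplex-style system uses learned language models rather than a regex; the phrasing pattern and the action fields here are made up for illustration.

```python
# Toy sketch: map a transcribed request ("sell 20 shares of ACME") to a
# structured action. Only illustrates the hand-off from speech to a
# machine-readable task, not how Duplex itself works.
import re

def parse_request(utterance: str):
    m = re.search(r"\b(buy|sell)\s+(\d+)\s+shares?\s+of\s+([A-Za-z.]+)", utterance, re.I)
    if not m:
        return None
    side, quantity, ticker = m.groups()
    return {"action": side.lower(), "quantity": int(quantity), "ticker": ticker.upper()}

print(parse_request("Hi, I'd like to sell 20 shares of ACME, please."))
# -> {'action': 'sell', 'quantity': 20, 'ticker': 'ACME'}
```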

As soon as you start down this path, it is easy to see that there is no stopping point. Humans need to stay "in the loop" for only two primary reasons: liability (decision-making with potential for legal consequences), and physical tasks which robots lack the dexterity to perform or which are not cost-effective to roboticize. Other than that, every job is a candidate for AI-robotic mechanization and - not only this - we should wish this mechanization to occur as quickly as possible.

The other key thing to realize is that AI-based automation is its own catalyst. Consider the task of pulling arbitrary data from a rendered Web page. Currently, we employ humans to perform this task. Many of these tasks could be automated with hand-crafted software that is told exactly how to recognize the right box containing the information it is supposed to collect. For the most part, we don't bother writing software like this because the cost of paying coders to write and maintain it exceeds the cost of just paying a person to do the rote task.

AI changes the equation by relieving the coder of the need to manually code up every specific task. Instead, AI developers can build a general-purpose "data-acquisition package" that knows how to learn what data a person is copying from anywhere (perhaps a Web page) into any running program (perhaps an Excel sheet). Training the AI to mimic this person's job is then just a matter of recording their session while they are working and training the general-purpose data-acquisition AI on those sessions. You would train a separate instance of the AI for each type of data-acquisition (job/function) it is to perform, but the training package itself is pretty much general-purpose. Each of these instances can then be reduced to a push-button on a Web management interface. Here's the punchline: you can apply this very same training flow to the process of training AI packages itself. That's what makes it a catalyst. Once we start doing this, it will become a self-accelerating phenomenon -- a chain-reaction.
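
For contrast, here is roughly what the hand-crafted version of the earlier Web-to-spreadsheet-to-database work-flow looks like. Every selector and column is hard-coded, which is exactly the brittleness a learned, general-purpose data-acquisition package would remove. The URL, CSS selector, and file names are hypothetical, and the libraries (requests, BeautifulSoup, openpyxl, sqlite3) are just common choices.

```python
# Hand-crafted version of the work-flow: pull one value from a Web page,
# drop it into a spreadsheet, then sync it to a database. Everything here is
# hard-coded to one page layout and one schema.
import sqlite3
import requests
from bs4 import BeautifulSoup
from openpyxl import Workbook

resp = requests.get("https://example.com/prices")            # data source (hypothetical)
soup = BeautifulSoup(resp.text, "html.parser")
el = soup.select_one("#price-box .current")                   # the "right box" (hypothetical)
price = el.get_text(strip=True) if el else ""

wb = Workbook()                                               # data entry into a spreadsheet
ws = wb.active
ws.append(["example.com", price])
wb.save("entries.xlsx")

conn = sqlite3.connect("backend.db")                          # sync with the back-end database
conn.execute("CREATE TABLE IF NOT EXISTS entries (source TEXT, price TEXT)")
conn.execute("INSERT INTO entries VALUES (?, ?)", ("example.com", price))
conn.commit()
conn.close()
```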

Many people will bemoan "job destruction" at the hands of AI and robot-mechanization. Doubtless, there will be people who get hurt along the way. But down this path lies the best thing that can possibly happen to humanity - a new way-of-life characterized by an abundance never before seen.

Post by Jon Irenicus » Thu May 10, 2018 6:22 pm

I'll repost a study on the topic from not too long ago:

https://www.oxfordmartin.ox.ac.uk/publi ... /view/1314

It does kind of make career moves tougher to plan. For instance, this study puts software development on the lower end of the spectrum for susceptibility to automation. Would you agree with that?

Also:

https://www.technologyreview.com/the-do ... utomation/

Post by Clayton » Fri May 11, 2018 5:22 am

Jon Irenicus wrote:
Thu May 10, 2018 6:22 pm
I'll repost a study on the topic from not too long ago:

https://www.oxfordmartin.ox.ac.uk/publi ... /view/1314

It does kind of make career moves tougher to plan. For instance, this study puts software development on the lower end of the spectrum for susceptibility to automation. Would you agree with that?

Also:

https://www.technologyreview.com/the-do ... utomation/
I think it's better to think of automation in terms of work-flows rather than "jobs". By "work-flow", I mean a sequence of steps performed in the course of performing a job (only people can "have a job" in the ordinary sense of working for pay) that could, in principle, be written out in exhaustive detail. In this view, the question is "what percentage of any given job can be automated?", which is not the same thing as asking what percentage of jobs can be automated. Yes, some jobs can be 100% automated. But not even all the jobs which computers could fully automate will be automated - technology is not the only limiting factor; liability and many other considerations come into play. So, I think it's safe to say that the vast majority of jobs will not be 100% automated and, in any case, as we begin to reap the quality-of-life and cost-of-living benefits of AI automation, we are going to realize that we want as much automation as possible. Nowadays, nobody lifts heavy objects that could be carried with the aid of a piece of equipment. Why risk it? Why strain yourself? Just get the equipment and save time, liability and medical costs.

Rather than looking at it in terms of automation destroying jobs, I would look at it in terms of automation reducing the amount of tedium required to perform any given job. This is an unqualified positive. The tractor did not obsolete plowing; it only obsoleted the horse-drawn plow. Anyone who knew how to operate a horse-drawn plow could easily learn to operate a tractor-drawn plow and become vastly more productive in the process. It is true that you need fewer people in the fields, but this only frees those hands up to go do other, probably less labor-intensive work. The same is true of AI, considered as a "brain-tractor". Yes, it will automate the parts of your job that are the most costly, that is, the parts that have the highest disutility (and thus require the highest compensation to motivate someone to do them). But that's a good thing for both you and your employer. If you stay in your field, you will be able to do the work of 10 people, and if you move to another field, you will probably be doing work that puts less strain on your brain.

Post by Jon Irenicus » Fri May 11, 2018 11:20 am

I am my employer. :mrgreen: I work as a freelancer in the banking sector. I’m simply at the point though where I’m considering changing to data analytics or software dev, as what I do right now hasn’t got much durability in it beyond the next few years.

Anyway, I agree with your points re workflows, but certainly for some jobs all the workflows will be automatisable and it will be cost-effective for this to be done (as above, the MW is one reason for this), whereas for others the workflows that make them rewarding might be the ones to go, so I'm actively avoiding respecialising into such areas. Of course, there will be jobs that are enhanced by it too. I don't perceive automation as a threat, but at the same time it is something that should factor into career planning, is it not?

Post by Clayton » Fri May 11, 2018 4:49 pm

Jon Irenicus wrote:
Fri May 11, 2018 11:20 am
I am my employer. :mrgreen: I work as a freelancer in the banking sector. I’m simply at the point though where I’m considering changing to data analytics or software dev, as what I do right now hasn’t got much durability in it beyond the next few years.
You use software tools in the course of doing what you do (we can refer to it as your "job", even though you freelance). What I am saying is that those tools might be improved by deep learning to become much more powerful than they are today, so that what used to be done by 10 freelancers can be done by one using the new tools. Note that this is a generic point; there are myriad specific exceptions, and what you do just might be one of them - I don't know.

If you're looking to make a transition based on potential for pay (as well as aptitude/interest), I would highly recommend taking online courses in ML/AI (Coursera has university-level offerings at very reasonable prices, Khan Academy has free courses, and there are other "study by night" options out there). In some ways, machine learning is actually easier than software development. Software development is like designing the engine for a car: you have to keep track of each and every variable, and one mishap anywhere in the design can blow up the entire thing. Machine learning is more about tweaking and tuning hyperparameters or building models out of fairly intuitive components (intuitive once you internalize the vocabulary and the overall mental models). I would compare it to sound-editing, if you've ever used a package like FL Studio... you wire together components (each of which you separately understand) into a combined model to try to achieve an overall desired effect, and then you tweak/tune that model until you get the effect you were seeking. So, it's more of a persistence game than software engineering, which is often a "memorize everything" game in practice.
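
To give a flavour of that "tweak and tune" style of work, here is a small, self-contained sketch using scikit-learn to grid-search two hyperparameters of a small neural net on synthetic data; the parameter grid is arbitrary, and choosing it well is most of the job.

```python
# A taste of the "tweak and tune" work-style: try a few hyperparameter
# combinations for a small neural net and keep the best one.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

search = GridSearchCV(
    MLPClassifier(max_iter=1000, random_state=0),
    param_grid={
        "hidden_layer_sizes": [(16,), (32,), (32, 16)],  # network shape
        "alpha": [1e-4, 1e-3, 1e-2],                     # L2 regularization strength
    },
    cv=3,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```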
Anyway, I agree with your points re workflows but certainly for some jobs all the workflows will be automatisable and it will be cost effective for this to be done (as above, the MW is one reason for this)
I don't think that automation is going to bring a halt to the MW; in fact, I think that the MW will continue to rise even as automation increases. In "developed" countries, I think the MW conceals the fact that we are wealthy enough that we can collectively afford to engage in self-destructive measures like MW. As automation increases, our capacity to afford such nonsensical public measures will also increase. :(
whereas for others workflows that make them rewarding might be the ones to go, so I’m actively avoiding respecialising into such areas. Of course there will be those that are enhanced by it too. I don’t perceive automation as a threat but at the same time it is something that should factor into career planning, is it not?
It definitely should factor into it. That's why my recommendation to you is to actually look at how you might be able to use your existing skills plus some remedial courses to get into the ML/AI field. Automate your own job away, that's the best thing you can do for yourself financially as well as for the rest of us.

Post by Jon Irenicus » Fri May 11, 2018 5:16 pm

Ah, my point regarding the MW is that it is propelling the pace of automation by making some forms of labour cost-prohibitive, as per the article I linked. I'm increasingly convinced that the more intelligent/sinister proponents of such measures are actually aware that they cause unemployment but, much like with Obamacare, want the MW to "break" the system; they are aiming for a measure like UBI to come into place as a result of counterproductive policies like the MW, as it would be a step closer to gaining control over the population by controlling its source of finances.

Anyway, thanks for your recommendations. I'll look into the above online courses to gauge interest. There are actually a lot of options in the UK, including master's courses compressing the content of a BSc into a single year with an emphasis on software dev, which I think I've mentioned before. Data analytics as a field focuses more heavily on ML, so I am also considering that. I've amassed quite a bit in savings, so I'm inclined to take up to a year off and use it to re-specialise. I got the impression that ML/AI require strong quantitative skills, though, so I was thinking of brushing those up in the meantime.

As it so happens, my present work involves dealing with the damage that poorly applied automation, in combination with human operators, causes to customer outcomes. :P

Post by Clayton » Fri May 11, 2018 6:17 pm

The schemers will scheme. In my view, it's all wasted energy. They feel like they're accomplishing something. In fact, their machinations are all futile and accomplish nothing.
As it so happens, my present work involves dealing with the damage that poorly applied automation, in combination with human operators, causes to customer outcomes.
Nice... basically, a hazmat cleanup guy. I would say that gives you a lot of insight into how to use automation correctly.

Practically, ML/AI boils down to using a machine-learning software API (Keras, TensorFlow, PyTorch, see here) to create a software product that can take inputs of a given data-type (say, decimal numbers dumped from a spreadsheet) and create outputs of a desired type (say, decimal numbers suitable to be imported into another spreadsheet). I like to envision the flow of data through an ML design by analogy to a system of water pipes - the data "flows" in, it gets processed in various ways, then the main neural nets transform the data from its input form (raw data) into the desired output (perhaps labels or probability estimates), and it undergoes a final output transformation and then "flows" to the output. The person writing ML software is basically choosing which processing components to use and how to connect the pipes together to get the desired flow from input to output and then "training" those core neural nets to try to get the whole system to work properly.
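
Here is the pipes analogy made concrete as a minimal Keras sketch: data flows in, passes through a preprocessing step, goes through two dense layers, and flows out as a single number. The shapes and the random stand-in data are placeholders.

```python
# The "pipes" analogy in code: input -> preprocessing -> neural-net layers -> output.
import numpy as np
from tensorflow import keras

X = np.random.rand(256, 10).astype("float32")   # stand-in input: 10 numbers per row
y = X.sum(axis=1, keepdims=True)                 # stand-in target: one number per row

norm = keras.layers.Normalization()              # a preprocessing "pipe fitting"
norm.adapt(X)                                    # learn the scaling from the data

model = keras.Sequential([
    keras.layers.Input(shape=(10,)),             # data flows in
    norm,
    keras.layers.Dense(32, activation="relu"),   # the core neural-net transformations
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1),                       # data flows out
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)             # "training" = tuning until the flow looks right
print(model.predict(X[:3]))
```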

Here's a 1-minute description of pretty much everything you need to know about neural nets in order to start using ML packages:



For the details of using real-world packages, I like Andrew Ng's teaching style and he has oodles of free course-videos on YouTube. Highly recommend.

I also recommend this link as a great cheat-sheet for getting a feel for the various kinds of NNs out there. The biggest ones to keep in mind are:

- Feed-forward (FF) neural nets; these pretty much act like an arbitrarily complicated mathematical function y=f(x); you train the network with pairs of x,y and it "learns" the underlying function f()

- Convolutional neural nets (CNN); these NNs break up a problem in space, especially 2D space, and are heavily used in image recognition, where you want to perform pretty much the same function on every part of an image: slice the image up into a grid of pixel-blocks and train the neural net to look only at each block and its immediate neighbors. You can send the outputs of the CNN, as it is applied across the entire image, to a FF neural net which will then convert the raw CNN output into labels like "dog", "cat", "house", "brick" and so on (see the sketch further down).

- Recurrent neural nets (RNN); these NNs break up a problem in time. Their power is mostly unexplored to date, partly because they are compute-intensive to train and partly because we really have no idea how much it is possible to accomplish with an RNN - at a very high level, the brain itself can be thought of as a kind of RNN. Adding LSTM (Long Short-Term Memory, the worst name ever for a beautiful concept) allows an RNN to learn and track patterns across very long spans of time, so it can be trained to pick up on surprisingly sophisticated things - matching opening and closing parentheses in free text, for example.

These are the heavy-hitters, and there are many other niche neural nets that can perform important tasks that these types are not as good at. The ML APIs out there remove the need to understand the minutiae of what these neural nets are doing and only require a conceptual understanding of which nets are best suited to which data-processing tasks.
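
As a compact illustration of the CNN-feeding-a-FF-net pattern mentioned above, here is a hedged Keras sketch: convolution layers scan local pixel blocks, then dense layers turn the extracted features into label scores. The 32x32 images, the ten classes, and the random data are placeholders.

```python
# CNN -> FF pattern: convolutions look at local pixel blocks, dense layers
# turn the extracted features into one score per label ("dog", "cat", ...).
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(32, 32, 3)),
    keras.layers.Conv2D(16, 3, activation="relu"),   # each filter scans local pixel blocks
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),       # the FF stage fed by the CNN output
    keras.layers.Dense(10, activation="softmax"),    # one score per label
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

images = np.random.rand(64, 32, 32, 3).astype("float32")  # stand-in images
labels = np.random.randint(0, 10, size=(64,))              # stand-in labels
model.fit(images, labels, epochs=1, verbose=0)
```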

Hope that info helps you.

Post by Jon Irenicus » Fri May 11, 2018 6:35 pm

Do you think "hooking up" the brain to a technology like the internet - however it is accomplished - presents some significant risks? In that, for instance, although you may be able to harness the power of such a vast network in retrieving information or communicating with others, it could also become a rather literal form of controlling what people think? Plus the very real loss of privacy that might entail. I'm wondering whether by forfeiting some of our frailties, in becoming more integrated with machines, we'd also become more vulnerable to some of their frailties. Being "hacked", for instance, would take on a whole new meaning in terms of the damage a hacker could cause.
Nice... basically, a hazmat cleanup guy. I would say that gives you a lot of insight into how to use automation correctly.
Kind of, I mean a big part of it is still dealing with the human element, but because of how heavily some banks employ automation, it features very prominently in their operations, and yes, it's quite easy in some cases to take a step back and see where it all goes wrong. Anyway, thanks for the info, I do find it useful and I'll look into it further later. I find data processing and manipulation to be a pretty fascinating topic, especially in terms of how you can get a machine to plough through swathes of data to pull out meaningful/relevant information. Makes you wonder how we got by before computers. :mrgreen:

Post by Clayton » Sat May 12, 2018 12:12 am

Jon Irenicus wrote:
Fri May 11, 2018 6:35 pm
Do you think "hooking up" the brain to a technology like the internet - however it is accomplished - presents some significant risks?
Yes. The open question is how to mitigate these risks. Elon Musk believes the answer is to hook ourselves up as much as possible as fast as possible so that we can "keep pace" with the machines (by augmenting ourselves with the latest capabilities present in each new generation of machines). Search "neural lace". An approach I find more compelling is setting the machines on each other (in a virtual environment, of course) and letting them battle it out while we watch from "on high". While the machines will get smarter than us (in any particular domain), we will still be able to get an overall feel of their intentions. It's basically divide et impera applied to things that are smarter than us but are handicapped by having gotten a late start (on the evolutionary timeline).
In that, for instance, although you may be able to harness the power of such a vast network in retrieving information or communicating with others, it could also become a rather literal form of controlling what people think? Plus the very real loss of privacy that might entail.
See my other thread, "Technological Utopia and Eternal Life" - what you have described is precisely what is going to happen. Much more than this, actually: first, we will lose privacy, then we will lose agency (to "mind-control", if you will), then we will lose our humanity (with the emergence of the beast-mind) and, finally, we will lose our sense of a local identity ("the self", per se).

On the surface, these sound like really terrifying things. But suppose you had a time-machine and flew back five thousand years to visit one of the great Pharaohs of Egypt. You show him your tablet and describe the modern world. Amazed, he asks to travel back with you. You inform him of the risks and tell him that he can only stay for a few hours. You take him back to the present and show him skyscrapers, movie theaters, and a selection of the wonders of the modern world. When he asks how the world was able to transform from his time into our time, you give him the grisly details - the empires, the genocides, the global plagues, the endless wars, industrialization, tanks, guns, bombs, chemical and nuclear weapons, satellite guidance, and so on. His eyes glazed, he stumbles back and concludes that the future is hell and he would rather stay in Egypt, even though its glory can never compare to the technological wizardry of the modern world.

I believe we are so close to the technological singularity that we are actually able to peer into it. Even though it sounds terrifying, it is no worse than the growing pains that humanity has already endured. Hopefully, even at its worst, it will not be as bad. But it will be utterly transformative. I am convinced that the world, 100 years from now, will be far more unrecognizable to us, today, than the modern world would be to our time-traveling Pharaoh.
I'm wondering whether by forfeiting some of our frailties, in becoming more integrated with machines, we'd also become more vulnerable to some of their frailties. Being "hacked", for instance, would take on a whole new meaning in terms of the damage a hacker could cause.
As you know, I look at this whole topic through both a technological and a spiritual lens. It goes back to that quote from Clarke: "Any sufficiently advanced technology is indistinguishable from magic." I think that the root nature of the Universe is to nudge us along from lower technology to higher technology - towards the "magic", as it were. But the danger in such nudging is that it can devolve into dazzling people into a catatonic state, or "punking" people - something that colonizers, especially European colonizers, have been massively guilty of when colonizing lower-technology cultures. Perhaps what our ancestors understood as "demon possession", for example, is a manifestation of hacking from a higher technology than we understand today. Phenomenologically, there is no point in making a distinction.
how you can get a machine to plough through swathes of data to pull out meaningful/relevant information. Makes you wonder how we got by before computers. :mrgreen:
What is amazing about computers is that they are absolutely indefatigable. Many problems that computers can solve, the human brain can solve with a much smaller expenditure of energy (e.g. chess-playing or image recognition). But the human brain cannot tirelessly churn through an ocean of data or combinatorial solutions. What we want is the best of both worlds - a mind that can operate as efficiently as the human brain while churning through endless possibilities without fatigue.

