Today, in many situations, it is increasingly difficult to tell whether you are interacting with a human or an algorithm. Deciding who is trustworthy, getting the right information, and reading the right "trust signals" are hard enough for humans. But when we start outsourcing trust to algorithms, how do we judge their intent? And when an algorithm or robot makes a decision on your behalf that you disagree with, who is to blame for the unintended consequences?
Three technological milestones must be crossed before artificial intelligence can do many of these things in a way that reassures humans.
First, AI must confront the question of how much of "artificial intelligence" is still, in fact, human labor. The entertainment industry has long given artificial intelligence a misleading image, leading us to believe it is a technology that can function without human input.
We now claim to have entered the era of big data, but more data does not mean better data: most of it is unlabeled, and labeled data remains scarce. Data labeling is repetitive work, yet it is the starting point for most AI applications. Artificial intelligence, promoted as a way to free people from repetitive tasks, thus faces a paradox: it takes a great deal of repetitive human work to make machines intelligent.
While tasks such as data labeling will gradually be taken over by AI itself, looking ahead, AI will always need human input and expertise to reach its full potential in an ethical, responsible, and safe manner. In social media, for example, humans are needed to correct the extreme polarization that algorithms produce; in medicine, humans and machines working together achieve more than either can alone; in areas such as autonomous driving, AI's superiority is the result of training, but once the system encounters a situation it was never trained for, that advantage is erased. When this happens, the processing power of AI must give way to human creativity and adaptability.
Therefore, artificial intelligence must come to terms with the relationship between the "artificial" and the "intelligence." Ultimately, as long as there are new applications for AI to learn and new tasks for it to master, it will always require human input.
Second, AI has to learn how to learn. Most AI technologies available today learn or optimize their behavior toward specific goals, and so can only do what they are taught. Their capabilities reflect the training data and its quality, as well as the design of the AI process. As noted earlier, exceptions still usually have to be handled manually.
This means that today's AI is narrow: dedicated to a specific application, with processes and procedures that do not transfer. But when AI becomes genuinely smart enough to learn behaviors it has never been taught, what will that do to human cognition? And what role does ethics play in this essentially accelerated process of selection?
Third, how to resolve the dilemma of the "unknown unknowns." The phrase comes from a famous remark by then U.S. Defense Secretary Donald Rumsfeld in February 2002, in response to a reporter's question. At the time, the United States was preparing for war with Iraq on the grounds that its government possessed weapons of mass destruction and supported terrorists. Asked about the evidence, Rumsfeld explained: "As we know, there are 'known knowns'; there are things we know we know. We also know there are 'known unknowns'; that is to say, we know there are some things we do not know. But there are also 'unknown unknowns', the ones we don't know we don't know."
The inner workings of AI systems are often opaque, and humans struggle to understand how AI learning systems arrive at their conclusions. To borrow Rumsfeld's phrasing, this is a typical "unknown unknown." One of the technical challenges AI must overcome, then, is this understanding gap between the system and the humans who rely on it. To address it, designers and observers have been discussing the need to build some level of interpretive logic into AI systems, so that errors can be checked and humans can learn from and understand the system's reasoning.
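To make the idea of "interpretive logic" concrete, here is a minimal, purely illustrative sketch: a toy decision procedure that returns not only its prediction but also the human-readable rules that produced it. The loan-approval scenario, thresholds, and field names are all hypothetical assumptions, not taken from any real system; the point is only the pattern of pairing each decision with its reasons.

```python
# A minimal sketch of "interpretive logic": the model reports, alongside
# each prediction, the human-readable rules that fired to produce it.
# The loan-approval rules and thresholds below are entirely hypothetical.

def predict_with_explanation(applicant):
    """Return an approve/deny decision plus the reasons behind it."""
    reasons = []
    score = 0
    if applicant["income"] >= 50_000:
        score += 1
        reasons.append("income >= 50,000")
    if applicant["debt_ratio"] <= 0.4:
        score += 1
        reasons.append("debt ratio <= 0.4")
    if applicant["years_employed"] >= 2:
        score += 1
        reasons.append("employed >= 2 years")
    # Approve when at least two of the three rules are satisfied.
    decision = "approve" if score >= 2 else "deny"
    return decision, reasons

decision, reasons = predict_with_explanation(
    {"income": 60_000, "debt_ratio": 0.3, "years_employed": 1}
)
print(decision, reasons)  # approve ['income >= 50,000', 'debt ratio <= 0.4']
```

A rule list like this is trivially inspectable; the open research problem the paragraph above points to is recovering comparably faithful explanations from opaque learned models, where the decision logic is distributed across millions of parameters rather than three explicit conditions.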