• 2022-06-19 Question

    When going through a security check, if "the detector beeps", what might the staff ask? ( )
    A: Could you please empty your pockets?
    B: Do you have any liquids?
    C: Are you wearing a belt?
    D: Could you please open your carry-on bag?

  • 2022-06-19 Question

    The participants tended to perceive irregular beeps as a threat.
    (Task: judge whether the statement is correct based on the given material. Source: December 2017 CET-4, Paper 1, Careful Reading 1.)

  • 2021-04-14 Question

    Passage One
    Questions 1 to 5 are based on the following passage.

    As Artificial Intelligence (AI) becomes increasingly sophisticated, there are growing concerns that robots could become a threat. This danger can be avoided, according to computer science professor Stuart Russell, if we figure out how to turn human values into a programmable code.

    Russell argues that as robots take on more complicated tasks, it's necessary to translate our morals into AI language. For example, if a robot does chores around the house, you wouldn't want it to put the pet cat in the oven to make dinner for the hungry children. “You would want that robot preloaded with a good set of values,” said Russell.

    Some robots are already programmed with basic human values. For example, mobile robots have been programmed to keep a comfortable distance from humans. Obviously there are cultural differences, but if you were talking to another person and they came up close in your personal space, you wouldn't think that's the kind of thing a properly brought-up person would do.

    It will be possible to create more sophisticated moral machines, if only we can find a way to set out human values as clear rules. Robots could also learn values from drawing patterns from large sets of data on human behavior. They are dangerous only if programmers are careless.

    The biggest concern with robots going against human values is that human beings fail to do sufficient testing and they've produced a system that will break some kind of taboo (禁忌). One simple check would be to program a robot to check the correct course of action with a human when presented with an unusual situation. If the robot is unsure whether an animal is suitable for the microwave, it has the opportunity to stop, send out beeps (嘟嘟声), and ask for directions from a human. If we humans aren't quite sure about a decision, we go and ask somebody else.

    The most difficult step in programming values will be deciding exactly what we believe is moral, and how to create a set of ethical rules. But if we come up with an answer, robots could be good for humanity.
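
    The check described in the passage's fifth paragraph is concrete enough to sketch in code: treat any request outside a vetted list as unusual, then stop, beep, and defer to a human before acting. The following Python sketch is only an illustration of that idea under assumed names; request_action and known_safe_actions are hypothetical, not anything specified in the passage.

        # A minimal, hypothetical sketch of the human-in-the-loop check the
        # passage describes: when the robot is unsure about a request, it
        # stops, beeps, and asks a human for directions before acting.

        def request_action(action: str, known_safe_actions: set[str]) -> bool:
            """Return True if the action may proceed, asking a human when unsure."""
            if action in known_safe_actions:
                return True  # a vetted, ordinary request needs no confirmation
            print("beep beep")  # signal uncertainty before doing anything
            answer = input(f"Unusual request: {action!r}. Proceed? [y/N] ")
            return answer.strip().lower() == "y"

        if __name__ == "__main__":
            safe = {"vacuum the floor", "water the plants"}
            for task in ("vacuum the floor", "microwave the cat"):
                verdict = "allowed" if request_action(task, safe) else "blocked"
                print(task, "->", verdict)

    Note that the default is to block: an unanswered or negative prompt leaves the unusual action undone, mirroring the passage's point that when we are unsure about a decision, we go and ask somebody else.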

  • 2021-04-14 Question

    Using the reading techniques introduced in Section 6.4.4, complete the comprehension task: What does the author say about the threat of robots?

    Passage:

    As Artificial Intelligence (AI) becomes increasingly sophisticated, there are growing concerns that robots could become a threat. This danger can be avoided, according to computer science professor Stuart Russell, if we figure out how to turn human values into a programmable code.

    Russell argues that as robots take on more complicated tasks, it’s necessary to translate our morals into AI language. For example, if a robot does chores around the house, you wouldn’t want it to put the pet cat in the oven to make dinner for the hungry children. “You would want that robot preloaded with a good set of values,” said Russell.

    Some robots are already programmed with basic human values. For example, mobile robots have been programmed to keep a comfortable distance from humans. Obviously there are cultural differences, but if you were talking to another person and they came up close in your personal space, you wouldn’t think that’s the kind of thing a properly brought-up person would do.

    It will be possible to create more sophisticated moral machines, if only we can find a way to set out human values as clear rules. Robots could also learn values from drawing patterns from large sets of data on human behavior. They are dangerous only if programmers are careless.

    The biggest concern with robots going against human values is that human beings fail to do sufficient testing and they’ve produced a system that will break some kind of taboo (禁忌). One simple check would be to program a robot to check the correct course of action with a human when presented with an unusual situation. If the robot is unsure whether an animal is suitable for the microwave, it has the opportunity to stop, send out beeps (嘟嘟声), and ask for directions from a human. If we humans aren’t quite sure about a decision, we go and ask somebody else.

    The most difficult step in programming values will be deciding exactly what we believe is moral, and how to create a set of ethical rules. But if we come up with an answer, robots could be good for humanity.
