How to Say "Killer" in English
Everyone is familiar with the word "killer"; we know this unusual profession from all kinds of films and television dramas. This article introduces how to say it in English.
English words for "killer"
killer
slayer
Related phrases
killer instinct
professional killer
Example sentences with "killer"
1. The vital clue to the killer's identity was his nickname, Peanuts.
2. Depression is the third thing that works to my patients' disadvantage.
3. It's a film about a serial killer and not for the faint-hearted.
4. Heart disease is the biggest killer of men in most developed countries.
5. A hit man had been sent to silence her over the affair.
6. Heart disease is the biggest killer, claiming 180,000 lives a year.
7. Police are theorizing that the killers may be posing as hitchhikers.
8. Other officers gave chase but the killers escaped.
9. a cold and calculating killer
10. It was the deadly striker's 11th goal of the season.
11. He is a hired killer.
12. They were professional killers who did in John.
13. She took out a contract on her ex-husband.
14. Cannibal killer Jeffrey Dahmer has been caught trying to hide a razor blade in his cell.
15. Their cold-blooded killers had then dragged their lifeless bodies upstairs to the bathroom.
Related reading: killer robots will be a nightmare for mankind
Mankind is a bloodthirsty species. According to Steven Pinker, the academic, for much of history being murdered by a fellow human was the leading cause of death. Civilisation is largely a tale of man’s violent instincts being progressively muffled. A part of this is the steady withdrawal of actual human flesh from the battle zone, with front lines gradually pulled apart by the advent of long-range artillery and air power, and the decline in the public’s tolerance for casualties.
Arguably, America’s principal offensive weapon is the drone, firing on targets thousands of miles from where its controller safely sits. Given the pace of advance, it takes no imaginative leap to foresee machines displacing human agency altogether from the act of killing. Artificial brains already perform well in tasks hitherto regarded as the province of humans. Computers will be trusted with driving a car or diagnosing an illness. Algorithmic intelligence could therefore surpass the human sort for making the decision to kill.
This prospect has prompted more than 1,000 artificial intelligence experts to write an open letter calling for the development of "lethal, autonomous weapons systems" to cease forthwith. Act now, they urge, or what they inevitably dub "killer robots" will be as widespread, and as deadly, as the Kalashnikov rifle.
It is easy to understand military enthusiasm for robotic warfare. Soldiers are precious, expensive and fallible. Every conflict exacts a heavy toll from avoidable human error. Machines in contrast neither grow weary nor lose patience. They can be sent into places unsafe or even impossible for ordinary soldiers. Rapid improvements in computational power are giving machines “softer” skills, such as the ability to identify an individual, flesh-and-blood target. Robots could eventually prove safer than even the most experienced soldier, for example by being capable of picking out a gunman from a crowd of children — then shooting him.
The case against robotic warfare is the same that applies to all advances in weaponry: the avoidance of unforeseeable consequences that cause unlimited damage to the innocent. Whatever precautions are taken, there is no foolproof way to stop weapons falling into the wrong hands. For a glimpse into what could go wrong, recall how Chrysler, the US carmaker, has needed to debug 1.4m vehicles after finding the car could be remotely hacked. Now imagine it came equipped with guns.
反對(duì)機(jī)器人戰(zhàn)爭(zhēng)的理由與反對(duì)所有武器進(jìn)步的理由相同——避免大量無(wú)辜受到傷害這種不可預(yù)知的后果。無(wú)論采取了什么預(yù)防措施,都沒(méi)有萬(wàn)無(wú)一失的方法來(lái)阻止武器落入不法之徒的手中。要想一窺那種情況下會(huì)有什么后果,可以回憶一下美國(guó)汽車制造商克萊斯勒(Chrysler)在發(fā)現(xiàn)汽車可以被遠(yuǎn)程入侵后,需要檢測(cè)和排除140萬(wàn)輛汽車隱患的事情?,F(xiàn)在,想象一下這些車裝備了槍支。
Technological futurists also fret about the exponential nature of advances in artificial intelligence. The scientist Stephen Hawking recently warned of the “technological catastrophe” that would follow artificial intelligence vastly exceeding the human sort. Whether this is a plumb inevitability or fantasy, science itself cannot decide: but in light of the risk, how sensible can it be to arm such super-intelligences?
The moral argument is more straightforward. The abhorrence of killing has been as important to its decline as any technological breakthrough. Inserting artificial intelligence into the causal chain would muddle the responsibility that must underpin any decision to kill. Without clear responsibility, not only might the means to wage war be enhanced, but so too might the appetite for doing so.
Uninventing weapons is impossible: consider anti-personnel landmines — autonomous weapons in their way — which are still killing 15,000-20,000 people annually. The nature of artificial intelligence renders it impossible to foresee where the development of autonomous weapons would end. No amount of careful programming could limit the consequences. Far better not to embark on such a journey.