View Full Version : Robots Apply Scientific Theory, Make Discovery

Ed Jewett
01-08-2010, 02:22 PM
Robotic System Makes a Novel Scientific Discovery (http://cryptogon.com/?p=12927)

January 8th, 2010 Dress up brute force (http://en.wikipedia.org/wiki/Brute-force_search) real pretty and put lipstick on it.
Via: Wales Online (http://www.walesonline.co.uk/news/wales-news/2010/01/07/welsh-robot-adam-takes-a-i-to-the-next-level-91466-25543300/):
It was hailed as taking artificial intelligence to a new level.
Now, the creation by Welsh scientists of the first robot in the world to make an independent scientific discovery has been named the fourth most significant discovery of 2009 by one of the world’s most influential magazines.
Adam, a computer that fully automates the scientific process, discovered in April how a baker’s yeast converts food like sugar into the amino acid lysine to produce the protein in bread.
The robot then devised experiments to test its predictions, ran experiments using laboratory robotics and interpreted the results, before repeating the cycle.
Adam was designed by Professor Ross King and colleagues at the Department of Computer Science at Aberystwyth University to carry out each stage of the scientific process automatically without the need for further human intervention.
Its success was placed ahead of the discovery of water on the moon and the progress made this year at the Large Hadron Collider in Switzerland – project managed by Welsh scientist Dr Lyn Evans – in Time magazine’s 10 most significant scientific discoveries of 2009. However, its importance was ranked behind the discoveries of our oldest human ancestor and a potential cure for colour blindness.
Time said of Adam: “By any standard, it was an elementary discovery — the identification of the role of about a dozen genes in a yeast cell. But what made this finding a major breakthrough was the unlikely form of the scientist: a robot.
“In April, Adam became the first robotic system to make a novel scientific discovery with virtually no human intellectual input. Robots have long been used in experiments – their vast computational power assisted in the sequencing of the human genome, for example – but Adam was the first to complete the cycle from hypothesis to experiment to reformulated hypothesis without human intervention.”
Adam’s discovery was published in the journal Science in April. The scientists chose yeast because its genes provide a simple model of how human cells work. Many of the reactions within yeast are replicated within human cells.
Adam is still a prototype, but Prof King’s team believe their next robot, Eve, holds great promise for scientists searching for new drugs to combat diseases such as malaria.
Posted in Rise of the Machines (http://cryptogon.com/?cat=3), Technology (http://cryptogon.com/?cat=12)



Comment: No word yet as to whether this new technology will be applied to the conundra of Dealey Plaza, 9/11, et alia.

Ed Jewett
01-08-2010, 03:48 PM
In a similar vein, on the other hand:

There’s a triple-header over at “Danger Room” that’s good head-scratching for anyone deeply into 4GW, 5GW, cyber-warfare, or otherwise monitoring WTFIRGDAH.

The first [ http://www.wired.com/dangerroom/2010/01/predicting-insurgencies-easy/ ] is about New Zealand-based physicist Sean Gourley’s tidy-looking equation [see his TED video at http://www.wired.com/dangerroom/2009/05/physicists-fool-proof-war-forumla-just-add-media-accounts/ ] based on the idea that insurgencies are “an ecology of dynamically evolving, self-organized groups following common decision-making processes.”

Gourley and company collected data on 54,679 “violent events” reported in nine different conflicts, including those in Iraq, Afghanistan, Peru and Colombia. The selected events were mostly fatal, because, apparently, “injuries are harder to cross-check.” After finding similarities between insurgent attacks in different conflicts, the team came up with a mathematical “feedback loop” model, based on two variables: “global signal” and “internal competition.”
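Gourley and collaborators reported that, across these conflicts, the distribution of attack sizes follows an approximate power law. As a rough sketch of the kind of fit involved (not their actual pipeline), here is a maximum-likelihood (Hill) estimate of a power-law exponent; the event sizes below are invented purely for illustration.

```python
import math

def power_law_alpha(sizes, x_min=1.0):
    """Maximum-likelihood (Hill) estimate of alpha for P(x) ~ x^-alpha,
    using only events of size >= x_min."""
    tail = [x for x in sizes if x >= x_min]
    if not tail:
        raise ValueError("no events at or above x_min")
    log_sum = sum(math.log(x / x_min) for x in tail)
    return 1.0 + len(tail) / log_sum

# Illustrative only: casualty counts per "violent event" (made-up numbers,
# many small events and a few large ones, as in heavy-tailed conflict data).
events = [1, 1, 1, 2, 1, 3, 1, 2, 5, 1, 2, 8, 1, 1, 21, 2, 3, 1, 4, 55]
alpha = power_law_alpha(events, x_min=1.0)
print(f"estimated alpha = {alpha:.2f}")
```

The estimator ignores events below the cutoff `x_min`; in practice the choice of cutoff matters as much as the estimator itself.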

The second [ http://www.wired.com/dangerroom/2010/01/visualizing-the-underwear-bombers-online-life/ ] notes a declassified intelligence report today on Detroit terror suspect Umar Farouk Abdulmutallab, a.k.a. the Underwear Bomber, and involves a look at Abdulmutallab’s online life — and possibly, his increasing radicalization. Using the online handle “Farouk1986,” Abdulmutallab was a regular on the Islamic forum Gawaher.com (http://www.gawaher.com/), where he appears to have posted 310 times (http://www.msnbc.msn.com/id/34618228/ns/us_news-washington_post/) between 2005 and 2007. Thanks to Evan Kohlmann of the NEFA Foundation, we now have all of Farouk1986’s posts, assembled into a single file (http://www.wired.com/dangerroom/2010/images_blogs/dangerroom/2009/12/farouk1986.zip). The CLS Blog took this one step further (http://computationallegalstudies.com/2010/01/06/the-time-evolving-structure-of-the-gawaher-islamic-forum-as-experienced-by-umar-farouk-abdulmutallab-the-christmas-day-bomber/), generating a basic visualization and analysis of the structure of Farouk1986’s online communication network as it evolved over time.

To do that, the CLS Blog expanded on the NEFA dataset to map out Farouk1986’s secondary and indirect communications and generate deeper context. “In order to obtain a better understanding of this communication network, we retrieved every ‘topic’ in which Farouk1986 participated at least once,” the authors write. “Each ‘topic’ is comprised of one or more ‘posts’ from one or more users. Each ‘post’ may be in response to another user’s ‘post.’ The NEFA data contains only posts made by Farouk1986 – our data contains the entire context within which his posts existed.”
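The network reconstruction described above can be sketched in miniature: treat each reply as an edge between two authors and read off the focal user's contacts. The records and field layout below are invented for illustration; they are not the actual NEFA or CLS data schema.

```python
from collections import defaultdict

# Hypothetical post records: (post_id, author, id of post replied to, or None).
posts = [
    (1, "userA", None),
    (2, "farouk1986", 1),
    (3, "userB", 2),
    (4, "farouk1986", 3),
    (5, "userC", 1),
    (6, "userB", 4),
]

def reply_network(posts):
    """Build undirected edges between authors linked by at least one reply."""
    author_of = {pid: author for pid, author, _ in posts}
    edges = defaultdict(set)
    for pid, author, reply_to in posts:
        if reply_to is not None:
            other = author_of[reply_to]
            if other != author:
                edges[author].add(other)
                edges[other].add(author)
    return edges

net = reply_network(posts)
print(sorted(net["farouk1986"]))  # → ['userA', 'userB']
```

Re-running this over successive time windows is what lets an analyst see whether a user's contact set is expanding or, as claimed here, becoming more exclusive and self-reinforcing.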

So what does this add to the understanding of the man who attempted to take down Northwest Airlines flight 253? For starters, Farouk1986 appeared to have joined an existing online network that moved his life in a more religious direction. Once he joined that network, his online interactions became more stable. Put otherwise, it may reflect the tendency of online behavior to become a “feedback loop.” Instead of expanding his apparent network of contacts, it became more exclusive and self-reinforcing.

Finally, Danger Room tells us [ http://www.wired.com/dangerroom/2010/01/obama-software-flaws-let-christmas-bomber-get-through/ ] that crappy government software — and failure to use that software right — almost got 289 people killed in the botched Christmas Day bombing. The problem was in the databases, and in the data-mining software. “Information technology within the CT [counterterrorism] community did not sufficiently enable the correlation of data that would have enabled analysts to highlight the relevant threat information.” President Obama ordered the Director of National Intelligence to “accelerate information technology enhancement, to include knowledge discovery, database integration, cross-database search, and the ability to correlate biographic information with terrorism-related intelligence.”
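At its simplest, the "cross-database search" and biographic correlation the order calls for amounts to joining records across databases on normalized identity fields. A toy sketch, with invented records and a deliberately crude name normalization (real systems use far more robust fuzzy matching):

```python
import re

def normalize(name):
    """Crude normalization: lowercase, drop everything but letters and spaces."""
    return re.sub(r"[^a-z ]", "", name.lower()).strip()

# Hypothetical records; structure and contents are illustrative only.
visa_db = [
    {"name": "Umar Farouk Abdul-Mutallab", "birth_year": 1986},
    {"name": "John Q. Smith", "birth_year": 1970},
]
threat_db = [
    {"name": "umar farouk abdulmutallab", "birth_year": 1986},
]

def correlate(a, b):
    """Return records of a that match some record of b on
    (normalized name, birth year)."""
    index = {(normalize(r["name"]), r["birth_year"]) for r in b}
    return [r for r in a if (normalize(r["name"]), r["birth_year"]) in index]

hits = correlate(visa_db, threat_db)
print(hits[0]["name"])  # → Umar Farouk Abdul-Mutallab
```

The point of the sketch is the failure mode the report describes: if the databases are never joined, or analysts never run the query, the hyphenation difference alone is enough for the match to be missed.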

All of which will be helpful. But analysts have to actually use the tools. That didn’t happen in the Christmas attack. “NCTC and CIA personnel who are responsible for watchlisting did not search all available databases,” the White House noted.

### ### ###

Personally, it sounds to me like these folks have too much time on their hands and that they need to stay home more often to help with the laundry and the child care.

Magda Hassan
01-08-2010, 08:52 PM
There’s a triple-header over at “Danger Room” that’s good head-scratching for anyone deeply into 4GW, 5GW, cyber-warfare, or otherwise monitoring WTFIRGDAH......

Personally, it sounds to me like these folks have too much time on their hands and that they need to stay home more often to help with the laundry and the child care. You are not wrong there Ed! :tee:

Keith Millea
01-22-2010, 06:26 PM
You can't appeal to robots for mercy or empathy - or punish them afterwards

by Johann Hari
In the dark, in the silence, in a blink, the age of the autonomous killer robot has arrived. It is happening. They are deployed. And - at their current rate of acceleration - they will become the dominant method of war for rich countries in the 21st century. These facts sound, at first, preposterous. The idea of machines that are designed to whirr out into the world and make their own decisions to kill is an old sci-fi fantasy: picture a mechanical Arnold Schwarzenegger blasting a truck and muttering: "Hasta la vista, baby." But we live in a world of such whooshing technological transformation that the concept has leaped in just five years from the cinema screen to the battlefield - with barely anyone back home noticing.
When the US invaded Iraq in 2003, they had no robots as part of their force. By the end of 2005, they had 2,400. Today, they have 12,000, carrying out 33,000 missions a year. A report by the US Joint Forces Command says autonomous robots will be the norm on the battlefield within 20 years.
The Nato forces now depend on a range of killer robots, largely designed by the British Ministry of Defence labs privatised by Tony Blair in 2001. Every time you hear about a "drone attack" against Afghanistan or Pakistan, that's an unmanned robot dropping bombs on human beings. Push a button and it flies away, kills, and comes home. Its robot-cousin on the battlefields below is called SWORDS: a human-sized robot that can see 360 degrees around it and fire its machine-guns at any target it "chooses". Fox News proudly calls it "the GI of the 21st century." And billions are being spent on the next generation of warbots, which will leave these models looking like the bulky box on which you used to play Pong.
At the moment, most are controlled by a soldier - often 7,500 miles away - with a control panel. But insurgents are always inventing new ways to block the signal from the control centre, which causes the robot to shut down and "die". So the military is building "autonomy" into the robots: if they lose contact, they start to make their own decisions, in line with a pre-determined code.
This is "one of the most fundamental changes in the history of human warfare," according to PW Singer, a former analyst for the Pentagon and the CIA, in his must-read book, Wired For War: The Robotics Revolution and Defence in the Twenty-First Century (http://www.amazon.com/gp/product/B002HOQ916?ie=UTF8&tag=commondreams-20&linkCode=xm2&camp=1789&creativeASIN=B002HOQ916). Humans have been developing weapons that enabled us to kill at ever-greater distances and in ever-greater numbers for millennia, from the longbow to the cannon to the machine-gun to the nuclear bomb. But these robots mark a different stage.
The earlier technologies made it possible for humans to decide to kill in more "sophisticated" ways - but once you programme and unleash an autonomous robot, the war isn't fought by you any more: it's fought by the machine. The subject of warfare shifts.
The military claim this is a safer model of combat. Gordon Johnson of the Pentagon's Joint Forces Command says of the warbots: "They're not afraid. They don't forget their orders. They don't care if the guy next to them has been shot. Will they do a better job than humans? Yes." Why take a risk with your soldier's life, if he can stay in Arlington and kill in Kandahar? Think of it as War 4.0.
But the evidence punctures this techno-optimism. We know the programming of robots will regularly go wrong - because all technological programming regularly goes wrong. Look at the place where robots are used most frequently today: factories. Some 4 per cent of US factories have "major robotics accidents" every year - a man having molten aluminium poured over him, or a woman picked up and placed on a conveyor belt to be smashed into the shape of a car. The former Japanese Prime Minister Junichiro Koizumi was nearly killed a few years ago after a robot attacked him on a tour of a factory. And remember: these are robots that aren't designed to kill.
Think about how maddening it is to deal with a robot on the telephone when you want to pay your phone bill. Now imagine that robot had a machine-gun pointed at your chest.
Robots find it almost impossible to distinguish an apple from a tomato: how will they distinguish a combatant from a civilian? You can't appeal to a robot for mercy; you can't activate its empathy. And afterwards, who do you punish? Marc Garlasco, of Human Rights Watch, says: "War crimes need a violation and an intent. A machine has no capacity to want to kill civilians.... If they are incapable of intent, are they incapable of war crimes?"
Robots do make war much easier - for the aggressor. You are taking much less physical risk with your people, even as you kill more of theirs. One US report recently claimed they will turn war into "an essentially frictionless engineering exercise". As Larry Korb, Ronald Reagan's assistant secretary of defence, put it: "It will make people think, 'Gee, warfare is easy.'"
If virtually no American forces had died in Vietnam, would the war have stopped when it did - or would the systematic slaughter of the Vietnamese people have continued for many more years? If "we" weren't losing anyone in Afghanistan or Iraq, would the call for an end to the killing be as loud? I'd like to think we are motivated primarily by compassion for civilians on the other side, but I doubt it. Take "us" safely out of the picture and we will be more willing to kill "them".
There is some evidence that warbots will also make us less inhibited in our killing. When another human being is standing in front of you, when you can stare into their eyes, it's hard to kill them. When they are half the world away and little more than an avatar, it's easy. A young air force lieutenant who fought through a warbot told Singer: "It's like a video game [with] the ability to kill. It's like ... freaking cool."
When the US First Marine Expeditionary Force in Iraq was asked in 2006 what kind of robotic support it needed, they said they had an "urgent operational need" for a laser mounted on to an unmanned drone that could cause "instantaneous burst-combustion of insurgent clothing, a rapid death through violent trauma, and more probably a morbid combination of both". The request said it should be like "long-range blow torches or precision flame-throwers". They wanted to do with robots things they would find almost unthinkable face-to-face.
While "we" will lose fewer people at first by fighting with warbots, this way of fighting may well catalyse greater attacks on us in the long run. US army staff sergeant Scott Smith boasts warbots create "an almost helpless feeling.... It's total shock and awe." But while terror makes some people shut up, it makes many more furious and determined to strike back.
Imagine if the beaches at Dover and the skies over Westminster were filled with robots controlled from Tora Bora, or Beijing, and could shoot us at any time. Some would scuttle away - and many would be determined to kill "their" people in revenge. The Lebanese editor Rami Khouri says that when Lebanon was bombarded by largely unmanned Israeli drones in 2006, it only "enhanced the spirit of defiance" and made more people back Hezbollah.
Is this a rational way to harness our genius for science and spend tens of billions of pounds? The scientists who were essential to developing the nuclear bomb - including Albert Einstein, Robert Oppenheimer, and Andrei Sakharov - turned on their own creations in horror and begged for them to be outlawed. Some distinguished robotics scientists, like Illah Nourbakhsh, are getting in early, and saying the development of autonomous military robots should be outlawed now.
There are some technologies that are so abhorrent to human beings that we forbid them outright. We have banned war-lasers that permanently blind people along with poison gas. The conveyor belt dragging us ever closer to a world of robot wars can be stopped - if we choose to.
All this money and all this effort can be directed towards saving life, not ever-madder ways of taking it. But we have to decide to do it. We have to make the choice to look the warbot in the eye and say, firmly and forever, "Hasta la vista, baby."
© 2010 The Independent
Johann Hari is a columnist for the London Independent (http://www.independent.co.uk/). He has reported from Iraq, Israel/Palestine, the Congo, the Central African Republic, Venezuela, Peru and the US, and his journalism has appeared in publications all over the world.

Ed Jewett
01-22-2010, 07:58 PM
Thanks, Keith. That piece comes to life when you consider the next post.

Ed Jewett
01-22-2010, 07:59 PM
Israeli “Auto Kill Zone” Towers Locked and Loaded

By Noah Shachtman (http://www.wired.com/dangerroom/author/noah-shachtman/)
December 5, 2008, 10:00 am
Categories: Crime and Homeland Security (http://www.wired.com/dangerroom/category/crime-and-homeland-security/), Israel (http://www.wired.com/dangerroom/category/israel/), Weapons and Ammo (http://www.wired.com/dangerroom/category/weapons-and-ammo/)

On the U.S.-Mexico border, the American government has been trying, with limited success, to set up a string of sensor-laden sentry towers (http://blog.wired.com/defense/2008/05/border-fence.html), which would watch out for illicit incursions. In Israel, they’ve got their own set of border towers. But the Sabras’ model comes with automatic guns, operated from afar (http://www.aviationweek.com/aw/blogs/defense/index.jsp?plckController=Blog&plckScript=blogScript&plckElementId=blogDest&plckBlogPage=BlogViewPost&plckPostId=Blog%3a27ec4a53-dcc8-42d0-bd3a-01329aef79a7Post%3a344244b3-3fee-4dfc-be03-992bf38a6f19).
The Sentry Tech towers are basically remote weapons stations, stuck on top of silos. "As suspected hostile targets are detected and within range of Sentry-Tech positions, the weapons are slewing toward the designated target," David Eshel (http://www.aviationweek.com/aw/community/persona/index.jsp?newspaperUserId=169938&plckUserId=169938) describes over at Ares. "As multiple stations can be operated by a single operator, one or more units can be used to engage the target, following identification and verification by the commander."
We flagged the towers last year, as the Israeli Defense Forces were setting up the systems, designed to create 1500-meter deep "automated kill zones (http://blog.wired.com/defense/2007/06/for_years_and_y.html)" along the Gaza border.
"Each unit mounts a 7.62 or 0.5" machine gun, shielded from enemy fire and the elements by an environmentally protective bulletproof canopy," Eshel explains. "In addition to the use of direct fire machine guns, observers can also employ precision guided missiles, such as Spike LR optically guided missiles and Lahat laser guided weapons."


Read More http://www.wired.com/dangerroom/2008/12/israeli-auto-ki/#ixzz0dNC7C9su

Jan Klimkowski
01-22-2010, 08:03 PM
Gordon Johnson of the Pentagon's Joint Forces Command says of the warbots: "They're not afraid. They don't forget their orders. They don't care if the guy next to them has been shot. Will they do a better job than humans? Yes." Why take a risk with your soldier's life, if he can stay in Arlington and kill in Kandahar? Think of it as War 4.0.

To which I respond with some Bladerunner:

Rachael: Do you like our owl?
Deckard: Is it artificial?
Rachael: Of course it is.
Deckard: Must be expensive.
Rachael: Very. It seems you feel our work is not a benefit to the public.
Deckard: Replicants are like any other machine - they're either a benefit or a hazard. If they're a benefit, it's not my problem.

And some Arthur Koestler:

"The evolution of the brain not only overshot the needs of prehistoric man, it is the only example of evolution providing a species with an organ which it does not know how to use."

Ruben Mundaca
01-23-2010, 08:45 PM
This article is long and in a similar vein, but worth reading:


Keith Millea
01-23-2010, 09:37 PM
From Ruben's link:

Unmanned systems represent the ultimate break between the public and its military. With no draft, no need for congressional approval (the last formal declaration of war was in 1941), no tax or war bonds, and now the knowledge that the Americans at risk are mainly just American machines, the already falling bars to war may well hit the ground. A leader won’t need to do the kind of consensus building that is normally required before a war, and won’t even need to unite the country behind the effort. In turn, the public truly will become the equivalent of sports fans watching war, rather than citizens sharing in its importance.

Yes sir, sports fans, sit down, have a beer, and enjoy YOUR WAR.