A training manual on summarizing and annotating texts in English. Semey, 2010


Artificial Intelligence at Edinburgh University: a Perspective

Jim Howe


Revised June 2007.

Artificial Intelligence (AI) is an experimental science whose goal is to understand the nature of intelligent thought and action. This goal is shared with a number of longer established subjects such as Philosophy, Psychology and Neuroscience. The essential difference is that AI scientists are committed to computational modelling as a methodology for explicating the interpretative processes which underlie intelligent behaviour, that relate sensing of the environment to action in it. Early workers in the field saw the digital computer as the best device available to support the many cycles of hypothesizing, modelling, simulating and testing involved in research into these interpretative processes. They set about the task of developing a programming technology that would enable the use of digital computers as an experimental tool. Over the first four decades of AI's life, a considerable amount of time and effort was given over to the design and development of new special purpose list programming languages, tools and techniques. While the symbolic programming approach dominated at the outset, other approaches such as non-symbolic neural nets and genetic algorithms have featured strongly, reflecting the fact that computing is merely a means to an end, an experimental tool, albeit a vital one.

The popular view of intelligence is that it is associated with high level problem solving, i.e. people who can play chess, solve mathematical problems, make complex financial decisions, and so on, are regarded as intelligent. What we know now is that intelligence is like an iceberg. A small amount of processing activity relates to high level problem solving, that is the part that we can reason about and introspect, but much of it is devoted to our interaction with the physical environment. Here we are dealing with information from a range of senses, visual, auditory and tactile, and coupling sensing to action, including the use of language, in an appropriate reactive fashion which is not accessible to reasoning and introspection. Using the terms symbolic and sub-symbolic to distinguish these different processing regimes, in the early decades of our work in Edinburgh we subscribed heavily to the view that to make progress towards our goal we would need to understand the nature of the processing at both levels and the relationships between them. For example, some of our work focused primarily on symbolic level tasks, in particular, our work on automated reasoning, expert systems and planning and scheduling systems, some aspects of our work on natural language processing, and some aspects of machine vision, such as object recognition, whereas other work dealt primarily with tasks at the sub-symbolic level, including automated assembly of objects from parts, mobile robots, and machine vision for navigation.

Much of AI's accumulating know-how resulted from work at the symbolic level, modelling mechanisms for performing complex cognitive tasks in restricted domains, for example, diagnosing faults, extracting meaning from utterances, recognising objects in cluttered scenes. But this know-how had value beyond its contribution to the achievement of AI's scientific goal. It could be packaged and made available for use in the work place. This became apparent in the late 1970s and led to an upsurge of interest in applied AI. In the UK, the term Knowledge Based Systems (KBS) was coined for work which integrated AI know-how, methods and techniques with know-how, methods and techniques from other disciplines such as Computer Science and Engineering. This led to the construction of practical applications that replicated expert level decision making or human problem solving, making it more readily available to technical and professional staff in organisations. Today, AI/KBS technology has migrated into a plethora of products of industry and commerce, mostly unbeknown to the users.



History of AI at Edinburgh

The Department of Artificial Intelligence can trace its origins to a small research group established in a flat at 4 Hope Park Square in 1963 by Donald Michie, then Reader in Surgical Science. During the Second World War, through his membership of Max Newman's code-breaking group at Bletchley Park, Michie had been introduced to computing and had come to believe in the possibility of building machines that could think and learn. By the early 1960s, the time appeared to be ripe to embark on this endeavour. Looking back, there are four discernible periods in the development of AI at Edinburgh, each of roughly ten years' duration. The first covers the period from 1963 to the publication of the Lighthill Report by the Science Research Council in 1973. During this period, Artificial Intelligence was recognized by the University, first by establishing the Experimental Programming Unit in January 1965 with Michie as Director, and then by the creation of the Department of Machine Intelligence and Perception in October 1966. By then Michie had persuaded Richard Gregory and Christopher Longuet-Higgins, then at Cambridge University and planning to set up a brain research institute, to join forces with him at Edinburgh.



Michie's prime interest lay in the elucidation of design principles for the construction of intelligent robots, whereas Gregory and Longuet-Higgins recognized that computational modelling of cognitive processes by machine might offer new theoretical insights into their nature. Indeed, Longuet-Higgins named his research group the Theoretical Section and Gregory called his the Bionics Research Laboratory. During this period there were remarkable achievements in a number of sub-areas of the discipline, including the development of new computational tools and techniques and their application to problems in such areas as assembly robotics and natural language. The POP-2 symbolic programming language which supported much subsequent UK research and teaching in AI was designed and developed by Robin Popplestone and Rod Burstall. It ran on a multi-access interactive computing system, only the second of its kind to be opened in the UK. By 1973, the research in robotics had produced the FREDDY II robot which was capable of assembling objects automatically from a heap of parts.

Unfortunately, from the outset of their collaboration these scientific achievements were marred by significant intellectual disagreements about the nature and aims of research in AI and growing disharmony between the founding members of the Department. When Gregory resigned in 1970 to go to Bristol University, the University's reaction was to transform the Department into the School of Artificial Intelligence which was to be run by a Steering Committee. Its three research groups (Jim Howe had taken over responsibility for leading Gregory's group when he left) were given departmental status; the Bionics Research Laboratory's name was retained, whereas the Experimental Programming Unit became the Department of Machine Intelligence, and (much to the disgust of some local psychologists) the Theoretical Section was renamed the Theoretical Psychology Unit! At that time, the Faculty's Metamathematics Unit, which had been set up by Bernard Meltzer to pursue research in automated reasoning, joined the School as the Department of Computational Logic. Unfortunately, the high level of discord between the senior members of the School had become known to its main sponsors, the Science Research Council. Its reaction was to invite Sir James Lighthill to review the field. His report was published early in 1973. Although it supported AI research related to automation and to computer simulation of neurophysiological and psychological processes, it was highly critical of basic research in foundational areas such as robotics and language processing. Lighthill's report provoked a massive loss of confidence in AI by the academic establishment in the UK (and to a lesser extent in the US). It persisted for a decade - the so-called "AI Winter".

Since the new School structure had failed to reduce tensions between senior staff, the second ten year period began with an internal review of AI by a Committee appointed by the University Court. Under the chairmanship of Professor Norman Feather, it consulted widely, both inside and outside the University. Reporting in 1974, it recommended the retention of a research activity in AI but proposed significant organizational changes. The School structure was scrapped in favour of a single department, now named the Department of Artificial Intelligence; a separate unit, the Machine Intelligence Research Unit, was set up to accommodate Michie's work, and Longuet-Higgins opted to leave Edinburgh for Sussex University.
The new Department's first head was Meltzer who retired in 1977 and was replaced by Howe who led it until 1996. Over the next decade, the Department's research was dominated by work on automated reasoning, cognitive modelling, children's learning and computation theory (until 1979 when Rod Burstall and Gordon Plotkin left to join the Theory Group in Computer Science). Some outstanding achievements included the design and development of the Edinburgh Prolog programming language by David Warren which strongly influenced the Japanese Government's Fifth Generation Computing Project in the 1980s, Alan Bundy's demonstrations of the utility of meta-level reasoning to control the search for solutions to maths problems, and Howe's successful development of computer based learning environments for a range of primary and secondary school subjects, working with both normal and handicapped children.

Unlike its antecedents which only undertook teaching at Masters and Ph.D. levels, the new Department had committed itself to becoming more closely integrated with the other departments in the Faculty by contributing to undergraduate teaching as well. Its first course, AI2, a computational modelling course, was launched in 1974/75. This was followed by an introductory course, AI1, in 1978/79. By 1982, it was able to launch its first joint degree, Linguistics with Artificial Intelligence. There were no blueprints for these courses: in each case, the syllabuses had to be carved out of the body of research. It was during this period that the Department also agreed to join forces with the School of Epistemics, directed by Barry Richards, to help it introduce a Ph.D. programme in Cognitive Science. The Department provided financial support in the form of part-time seconded academic staff and studentship funding; it also provided access to its interactive computing facilities. From this modest beginning there emerged the Centre for Cognitive Science which was given departmental status by the University in 1985.

The third period of AI activity at Edinburgh begins with the launch of the Alvey Programme in advanced information technology in 1983. Thanks to the increasing number of successful applications of AI technology to practical tasks, in particular expert systems, the negative impact of the Lighthill Report had dissipated. Now, AI was seen as a key information technology to be fostered through collaborative projects between UK companies and UK universities. The effects on the Department were significant. By taking full advantage of various funding initiatives provoked by the Alvey Programme, its academic staff complement increased rapidly from 4 to 15. The accompanying growth in research activity was focused in four areas: Intelligent Robotics, Knowledge Based Systems, Mathematical Reasoning and Natural Language Processing.

During the period, the Intelligent Robotics Group undertook collaborative projects in automated assembly, unmanned vehicles and machine vision. It proposed a novel hybrid architecture for the hierarchical control of reactive robotic devices, and applied it successfully to industrial assembly tasks using a low cost manipulator. In vision, work focused on 3-D geometric object representation, including methods for extracting such information from range data. Achievements included a working range sensor and range data segmentation package. Research in Knowledge Based Systems included design support systems, intelligent front ends and learning environments. The Edinburgh Designer System, a design support environment for mechanical engineers started under Alvey funding, was successfully generalised to small molecule drug design. The Mathematical Reasoning Group prosecuted its research into the design of powerful inference techniques, in particular the development of proof plans for describing and guiding inductive proofs, with applications to problems of program verification, synthesis and transformation, as well as in areas outside Mathematics such as computer configuration and playing bridge. Research in Natural Language Processing spanned projects in the sub-areas of natural language interpretation and generation. Collaborative projects included the implementation of an English language front end to an intelligent planning system, an investigation of the use of language generation techniques in hypertext-based documentation systems to produce output tailored to the user's skills and working context, and exploration of semi-automated editorial assistance such as massaging a text into house style.

In 1984, the Department combined forces with the Department of Linguistics and the Centre for Cognitive Science to launch the Centre for Speech Technology Research, under the directorship of John Laver. Major funding over a five year period was provided by the Alvey Programme to support a project demonstrating real-time continuous speech recognition.

By 1989, the University's reputation for research excellence in natural language computation and cognition enabled it to secure, in collaboration with a number of other universities, one of the major Research Centres which became available at that time, namely the Human Communication Research Centre which was sponsored by ESRC. During this third decade, the UGC/UFC started the process of assessing research quality. In 1989, and again in 1992, the Department shared a "5" rating with the other departments making up the University's Computing Science unit of assessment.

The Department's postgraduate teaching also expanded rapidly. A master's degree in Knowledge Based Systems, which offered specialist themes in Foundations of AI, Expert Systems, Intelligent Robotics and Natural Language Processing, was established in 1983, and for many years was the largest of the Faculty's taught postgraduate courses with 40-50 graduates annually. Many of the Department's complement of about 60 Ph.D. students were drawn from its ranks. At undergraduate level, the most significant development was the launch, in 1987/88, of the joint degree in Artificial Intelligence and Computer Science, with support from the UFC's Engineering and Technology initiative. Subsequently, the modular structure of the course material enabled the introduction of joint degrees in AI and Mathematics and AI and Psychology. At that time, the Department also shared an "Excellent" rating awarded by the SHEFC's quality assessment exercise for its teaching provision in the area of Computer Studies.

The start of the fourth decade of AI activity coincided with the publication in 1993 of "Realising our Potential", the Government's new strategy for harnessing the strengths of science and engineering to the wealth creation process. For many departments across the UK, the transfer of technology from academia to industry and commerce was uncharted territory. However, from a relatively early stage in the development of AI at Edinburgh, there was strong interest in putting AI technology to work outside the laboratory. With financial backing from ICFC, in 1969 Michie and Howe had established a small company, called Conversational Software Ltd (CSL), to develop and market the POP-2 symbolic programming language. Probably the first AI spin-off company in the world, CSL's POP-2 systems supported work in UK industry and academia for a decade or more, long after it ceased to trade. As is so often the case with small companies, the development costs had outstripped market demand.

The next exercise in technology transfer was a more modest affair, and was concerned with broadcasting some of the computing tools developed for the Department's work with schoolchildren. In 1981 a small firm, Jessop Microelectronics, was licensed to manufacture and sell the Edinburgh Turtle, a small motorised cart that could be moved around under program control leaving a trace of its path. An excellent tool for introducing programming, spatial and mathematical concepts to young children, over 1000 were sold to UK schools (including 100 supplied to special schools under a DTI initiative). At the same time, with support from Research Machines, Peter Ross and Ken Johnson re-implemented the children's programming language, LOGO, on Research Machines microcomputers. Called RM Logo, for a decade or more it was supplied to educational establishments throughout the UK by Research Machines.

As commercial interest in IT in the early 1980s exploded into life, the Department was bombarded by requests from UK companies for various kinds of technical assistance. For a variety of reasons, not least the Department's modest size at that time, the most effective way of providing this was to set up a separate non-profit making organisation to support applications oriented R&D. In July 1983, with the agreement of the University Court, Howe launched the Artificial Intelligence Applications Institute. At the end of its first year of operations, Austin Tate succeeded Howe as Director.

Its mission was to help its clients acquire know-how and skills in the construction and application of knowledge based systems technology, enabling them to support their own product or service developments and so gain a competitive edge. In practice, the Institute was a technology transfer experiment: there was no blueprint, no model to specify how the transfer of AI technology could best be achieved. So, much time and effort was given over to conceiving, developing and testing a variety of mechanisms through which knowledge and skills could be imparted to clients. A ten year snapshot of its activities revealed that it employed about twenty technical staff; it had an annual turnover just short of £1M, and it had broken even financially from the outset. Overseas, it had major clients in Japan and the US. Its work focused on three sub-areas of knowledge-based systems: planning and scheduling systems, decision support systems and information systems.



Formally, the Department of Artificial Intelligence disappeared in 1998 when the University conflated the three departments, Artificial Intelligence, Cognitive Science and Computer Science, to form the new School of Informatics.

A gift of tongues

Troy Dreier

PC MAGAZINE July 2006.

Jokes about the uselessness of machine translation abound. The Central Intelligence Agency was said to have spent millions trying to program computers to translate Russian into English. The best it managed to do, so the tale goes, was to turn the famous Russian saying "The spirit is willing but the flesh is weak" into "The vodka is good but the meat is rotten." Sadly, this story is a myth. But machine translation has certainly produced its share of howlers. Since its earliest days, the subject has suffered from exaggerated claims and impossible expectations.

Hype still exists. But Japanese researchers, perhaps spurred on by the linguistic barrier that often seems to separate their country's scientists and technicians from those in the rest of the world, have made great strides towards the goal of reliable machine translation—and now their efforts are being imitated in the West.

Until recently, the main commercial users of translation programs have been big Japanese manufacturers. They rely on machine translation to produce the initial drafts of their English manuals and sales material. (This may help to explain the bafflement many western consumers feel as they leaf through the instructions for their video recorders.) The most popular program for doing this is e-j bank, which was designed by Nobuaki Kamejima, a reclusive software wizard at AI Laboratories in Tokyo. Now, however, a bigger market beckons. The explosion of foreign languages (especially Japanese and German) on the Internet is turning machine translation into a mainstream business. The fraction of web sites posted in English has fallen from 98% to 82% over the past three years, and the trend is still downwards. Consumer software, some of it written by non-Japanese software houses, is now becoming available to interpret this electronic Babel to those who cannot read it.

Enigma variations

Machines for translating from one language to another were first talked about in the 1930s. Nothing much happened, however, until 1940 when an American mathematician called Warren Weaver became intrigued with the way the British had used their pioneering Colossus computer to crack the military codes produced by Germany's Enigma encryption machines. In a memo to his employer, the Rockefeller Foundation, Weaver wrote: "I have a text in front of me which is written in Russian but I am going to pretend that it is really written in English and that it has been coded in some strange symbols. All I need to do is to strip off the code in order to retrieve the information contained in the text."

The earliest "translation engines" were all based on this direct, so-called"transformer", approach. Input sentences of the source language were transformeddirectly into output sentences of the target language, using a simple form of parsing.

The parser did a rough/analysis of the source sentence, dividing it into subject, object,verb, etc. Source words were then replaced by target words selected from a dictionary,and their order rearranged so as to comply with the rules of the target language.
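To make the idea concrete, here is a minimal Python sketch of such a direct "transformer" engine. It is purely illustrative and not any real system: the toy dictionary, the four-word sentence pattern and the SOV-to-SVO reordering rule are all invented assumptions for this example.

```python
# Illustrative sketch of a direct ("transformer") translation engine:
# word-for-word dictionary substitution plus a crude reordering rule.
# The toy dictionary and the SOV -> SVO rule are invented for this example.

TOY_DICTIONARY = {
    "watashi": "I",
    "hon": "book",
    "yomu": "read",
}

def translate_direct(source_words: list[str]) -> str:
    """Translate a Japanese-like Subject-Object-particle-Verb sentence into English-like SVO order."""
    # Step 1: rough "parse" - assume fixed subject, object, particle, verb positions.
    subject, obj, _particle, verb = source_words
    # Step 2: dictionary lookup for each content word.
    s, o, v = (TOY_DICTIONARY.get(w, w) for w in (subject, obj, verb))
    # Step 3: rearrange to comply with the target language's word order (SVO).
    return f"{s} {v} the {o}"

if __name__ == "__main__":
    print(translate_direct(["watashi", "hon", "o", "yomu"]))  # -> "I read the book"
```

The sketch shows why the approach breaks down: nothing in it notices context or meaning, only positions and dictionary entries.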

It sounds simple, but it wasn't. The problem with Weaver's approach was summarized succinctly by Yehoshua Bar-Hillel, a linguist and philosopher who wondered what kind of sense a machine would make of the sentence "The pen is in the box" (the writing instrument is in the container) and the sentence "The box is in the pen" (the container is in the [play]pen).

Humans resolve such ambiguities in one of two ways. Either they note the context of the preceding sentences or they infer the meaning in isolation by knowing certain rules about the real world—in this case, that boxes are bigger than pens (writing instruments) but smaller than pens (play-pens) and that bigger objects cannot fit inside smaller ones. The computers available to Weaver and his immediate successors could not possibly have managed that.
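A minimal sketch of that kind of world-knowledge rule, assuming invented word senses and relative sizes, might look like this; it simply rejects any reading in which the contained object is bigger than its container.

```python
# Sketch of the world-knowledge rule described above: "X is in Y" only makes
# sense if X is smaller than Y. The senses and relative sizes are invented.

SENSES = {
    "pen": [("writing instrument", 1), ("playpen", 50)],
    "box": [("container", 10)],
}

def disambiguate(x: str, y: str) -> tuple[str, str]:
    """Pick senses of x and y such that 'x is in y' is physically possible."""
    for x_sense, x_size in SENSES[x]:
        for y_sense, y_size in SENSES[y]:
            if x_size < y_size:          # smaller objects fit inside bigger ones
                return x_sense, y_sense
    raise ValueError(f"No consistent reading for '{x} is in {y}'")

print(disambiguate("pen", "box"))   # ('writing instrument', 'container')
print(disambiguate("box", "pen"))   # ('container', 'playpen')
```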

But modern computers, which have more processing power and more memory, can. Their translation engines are able to adopt a less direct approach, using what is called "linguistic knowledge". It is this that has allowed Mr. Kamejima to produce e-j bank, and has also permitted NeocorTech of San Diego to come up with Tsunami and Typhoon - the first Japanese-language-translation software to run on the standard (English) version of Microsoft Windows.

Linguistic-knowledge translators have two sets of grammatical rules—one for the source language and one for the target. They also have a lot of information about the idiomatic differences between the languages, to stop them making silly mistakes.

The first set of grammatical rules is used by the parser to analyze an input sentence ("I read The Economist every week"). The sentence is resolved into a tree that describes the structural relationship between the sentence's components ("I" [subject], "read" [verb], "The Economist" [object] and "every week" [phrase modifying the verb]). Thus far, the process is like that of a Weaver-style transformer engine. But then things get more complex. Instead of working to a pre-arranged formula, a generator (i.e., a parser in reverse) is brought into play to create a sentence structure in the target language. It does so using a dictionary and a comparative grammar—a set of rules that describes the difference between each sentence component in the source language and its counterpart in the target language.
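The following Python sketch illustrates the parse-then-generate pipeline described above. It is a toy under stated assumptions: the fixed-pattern parser, the tiny dictionary and the English-to-SOV comparative-grammar rule are invented for illustration and stand in for the far richer grammars a real linguistic-knowledge translator would use.

```python
# Minimal sketch of the "linguistic knowledge" pipeline: parse the source
# sentence into a tree, then generate a target sentence by applying a
# comparative-grammar reordering rule. Grammar, dictionary and the
# SVO -> SOV mapping are invented for illustration.

from dataclasses import dataclass

@dataclass
class ParseTree:
    subject: str
    verb: str
    obj: str
    modifier: str

def parse_english(sentence: str) -> ParseTree:
    """Toy parser: assumes a fixed Subject-Verb-Object-Modifier pattern."""
    words = sentence.split()
    return ParseTree(subject=words[0], verb=words[1],
                     obj=" ".join(words[2:4]), modifier=" ".join(words[4:]))

def generate_target(tree: ParseTree, dictionary: dict[str, str]) -> str:
    """Toy generator: comparative grammar says English S-V-O-Mod becomes Mod-S-O-V."""
    lookup = lambda phrase: dictionary.get(phrase, phrase)
    ordered = [tree.modifier, tree.subject, tree.obj, tree.verb]
    return " ".join(lookup(p) for p in ordered)

tree = parse_english("I read The Economist every week")
print(tree)
print(generate_target(tree, {"I": "watashi-wa", "read": "yomu",
                             "every week": "maishuu"}))
```

The important design difference from the transformer sketch earlier is that translation happens between structured trees, not flat word lists, so the reordering rule can refer to grammatical roles rather than word positions.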

