The theoretical results pertain to *individuality validation*.
In classification problems such as *writer, face, fingerprint,
or speaker identification*, the number of classes is very large
or unspecified.
To establish the inherent distinctness of the classes,
i.e., validate individuality, we transform the many class problem into
a dichotomy by using a ``distance'' between two samples of the same
class and those of two different classes.
Based on conjectures derived from experimental observations, we
present theorems comparing polychotomy in the feature domain and dichotomy
in the distance domain from the viewpoint of tractability vs. accuracy.
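The transformation from a polychotomy to a dichotomy described above can be sketched as follows. This is a generic illustration: the toy feature vectors and the element-wise absolute difference used here are assumptions for the sketch, not the actual features or distance measures of the report.

```python
import numpy as np

def to_dichotomy(X, y):
    """Transform a many-class problem into a two-class one: each pair of
    samples becomes a feature-wise 'distance' vector, labeled 1 if both
    samples come from the same class and 0 otherwise."""
    pairs, labels = [], []
    n = len(X)
    for i in range(n):
        for j in range(i + 1, n):
            pairs.append(np.abs(X[i] - X[j]))   # element-wise distance
            labels.append(1 if y[i] == y[j] else 0)
    return np.array(pairs), np.array(labels)

# toy data: three classes, two samples each
X = np.array([[0.0, 0.0], [0.1, 0.0],
              [5.0, 5.0], [5.1, 5.0],
              [9.0, 0.0], [9.1, 0.0]])
y = np.array([0, 0, 1, 1, 2, 2])
D, L = to_dichotomy(X, y)
print(D.shape, int(L.sum()))  # (15, 2) 3
```

However many writers or speakers there are, the derived problem always has exactly two classes: within-class distances and between-class distances.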

The practical application issues include *efficient search*, *writer
identification*, and *discovery*.
First, fast nearest-neighbor algorithms for distance measures are
given. We also discuss the design and analysis of an algorithm for
*writer identification* with a known number of writers and its
relationship to *handwritten document image indexing and
retrieval*.
Finally, we describe mining a database, consisting of writer data and
features obtained from a handwriting sample statistically
representative of the US population, for feature evaluation and
for determining the similarity of a specific group of people.
%Z Tue, 15 May 2001 13:12:09 GMT
%U http://www.cse.buffalo.edu/tech-reports/2001-06.ps.Z
%A Chun-Hsi Huang
%T Communication-Efficient Bulk Synchronous Parallel Algorithms
%R 2001-06
%D July 30, 2001
%I Department of Computer Science and Engineering, SUNY Buffalo
%K parallel algorithm
%Y algorithms
%X Communication has been identified as the major bottleneck in the
performance of parallel algorithms. Theoretical parallel models such as
the PRAM have long been questioned because their notion of algorithmic
efficiency does not predict performance satisfactorily when algorithms
are implemented on commercially available parallel machines. This is
mainly because these models do not provide a reasonable scheme for
measuring communication overhead. Recently, several practical parallel
models aiming at portability and scalability of parallel algorithms have
been widely discussed. Among them, the Bulk Synchronous Parallel (BSP)
model has received much attention as a bridging model for parallel
computation, as it better addresses practical concerns such as
communication and synchronization.
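The way BSP accounts for communication can be made concrete through its standard per-superstep cost formula, max_i w_i + g*h + l. This formula comes from the general BSP literature, not from this dissertation, and the parameter values below are invented for illustration:

```python
def bsp_superstep_cost(work, messages, g, l):
    """Cost of one BSP superstep under the standard model: the maximum
    local work on any processor, plus g times the largest number of
    messages any processor sends or receives (an h-relation), plus the
    barrier synchronization cost l."""
    return max(work) + g * max(messages) + l

# two processors: 100 and 120 local operations, sending 4 and 6
# messages; per-message gap g = 2, barrier latency l = 50
print(bsp_superstep_cost([100, 120], [4, 6], g=2, l=50))  # 182
```

The cost of an algorithm is the sum of its superstep costs, which is why reducing the h-relations (the communication) matters as much as reducing local work.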
The BSP model has been used in a number of application areas, primarily in
scientific computing. Yet very little work has been done on problems
generally considered to be irregularly structured, which usually exhibit
highly data-dependent communication patterns that make communication
efficiency difficult to achieve. Typical examples are fundamental problems
in graph theory and computational geometry, which are important because a
vast number of interesting problems in many fields are defined in terms of
them. Practical, communication-efficient parallel algorithms for these
problems are therefore important.
In this dissertation, we present scalable parallel algorithms for some
fundamental problems in graph theory and computational geometry. In
addition to the time complexity analysis, we also present some techniques
for worst-case and average-case communication complexity analyses.
Experimental studies have been performed on two different architectures
to demonstrate their practical relevance.
%Z Thu, 02 Aug 2001 14:52:25 GMT
%U http://www.cse.buffalo.edu/tech-reports/2001-07.prn.Z
%A Luo, Hui
%T Knowledge-Based Image Understanding and Classification System for Medical Image Databases
%R 2001-07
%D July 30, 2001
%I Department of Computer Science and Engineering, SUNY Buffalo
%K Medical image databases, knowledge-based, image understanding,
image classification, feature extraction, snake model, object-oriented
representation
%X The understanding of medical images involves identifying
anatomical structures in images, establishing the relationships
among the structures, and matching them with pre-defined anatomical
models. The closer this match, the better the image is understood.
Hence, the development and use of anatomical knowledge models are
critical to the effectiveness of the system. In this thesis, we
present a new knowledge model for medical image content analysis
based on the object-oriented paradigm. The new model abstracts common
properties from different types of medical images using three
attributes: description, component, and semantic graph. It specifies
its actions to schedule the detection procedure, deform the model
shapes to match the corresponding anatomies in images, select the
best match candidates, and verify the candidates' spatial relations
against the semantic graph defined in the model. The advantages of
the new model are: 1) it represents object variability robustly, 2)
it provides a tighter bond between features and methods, and 3) it
supports complex image analysis and works on both normal and
abnormal images. Based on this model, a knowledge-based medical image
classification system is developed. The system assigns each input
image one or more of the most similar models through two-stage
processing. The first stage, coarse shape classification, focuses on
the global shape features of objects in the processed images. The
second stage performs detailed matching between extracted image
features and model-specified properties. The matching results yield
a set of models that are most likely to be present in the processed
image. To confirm them further, an evaluation strategy is used to
reject unreasonable results and infer the high-confidence matching
models. To improve detection accuracy, we also propose a new region
model that combines both region and edge information in image
segmentation, and we reformulate the traditional snake model and
level-set model with it. As expected, the reformulated models deal
robustly with ambiguity in images and overcome the problems
associated with initialization. The performance of the system has
been tested on different types of digital radiographs, including
chest, pelvis, ankle, and elbow images. The experimental results
suggest that the system prototype is applicable and robust, and the
accuracy of the system is nearly 70% on our image databases.
%Z Thu, 02 Aug 2001 14:58:55 GMT
%U http://www.cse.buffalo.edu/tech-reports/2001-08.ps.Z
%A Ismail, Haythem O.
%A Shapiro, Stuart C.
%T The Cognitive Clock: A Formal Investigation of the Epistemology of Time
%R 2001-08
%D August 14, 2001
%I Department of Computer Science and Engineering, SUNY Buffalo
%K Temporal reasoning, cognitive robotics, knowledge representation
%Y I.2.4
%X We present a formal model of time and time perception. The model is
used to endow a cognitive agent with a personal sense of time. A personal
sense of time is based on a representation of the real-time present that
allows the agent to distinguish it from the past and the future, and is
continuously changing to reflect the progression of time. This is important
for reasoning about, and acting in, dynamic domains and for interacting
with other agents (humans, for example). An agent that interleaves
reasoning and acting while maintaining a sense of the present faces what
has been dubbed {\em the problem of the fleeting now}. A solution to the
problem involves, not only endowing the agent with a sense of temporal
progression, but also with a feel for how much time has passed. In this
paper, we construct a theory of time that allows just that.
%Z Mon, 20 Aug 2001 18:14:24 GMT
%U http://www.cse.buffalo.edu/tech-reports/2001-09.tr.pdf.Z
%A Rapaport, William J.
%T Holism, Conceptual-Role Semantics, and Syntactic Semantics
%R 2001-09
%D August 17, 2001
%I Department of Computer Science and Engineering, SUNY Buffalo
%K philosophy of artificial intelligence, cognitive science, SNePS, holism, semantics, knowledge representation
%X This essay continues my investigation of "syntactic semantics": the
theory that, *pace* Searle's Chinese-Room Argument, syntax *does*
suffice for semantics (in particular, for the semantics needed for a
computational cognitive theory of natural-language understanding).
Here, I argue that syntactic semantics (which is internal and
first-person) is what has been called a conceptual-role semantics: The
meaning of any expression is the role that it plays in the complete
system of expressions. Such a "narrow", conceptual-role semantics is
the appropriate sort of semantics to account (from an "internal", or
first-person perspective) for how a cognitive agent understands
language. Some have argued for the primacy of external, or "wide",
semantics, while others have argued for a two-factor analysis.
But, although two factors can be specified---one internal and
first-person, the other only specifiable in an external, third-person
way---only the internal, first-person one is needed for understanding
how someone understands. A truth-conditional semantics can still be
provided, but only from a third-person perspective.
%Z Mon, 20 Aug 2001 18:18:53 GMT
%U http://www.cse.buffalo.edu/tech-reports/2001-10.ps.Z
%A Pavan, A.
%T Average-case complexity theory and polynomial-time reductions
%R 2001-10
%D August 26, 2001
%I Department of Computer Science and Engineering, SUNY Buffalo
%K average-case complexity, reasonable distributions, distributionally-hard languages, polynomial-time reducibilities, separating NP-completeness notions
%X This thesis studies average-case complexity
theory and polynomial-time reducibilities. The issues in average-case
complexity arise primarily from Cai and Selman's extension of Levin's
definition of average polynomial time. We study polynomial-time
reductions between distributional problems. Under strong but
reasonable hypotheses we separate ordinary $\NP$-completeness notions.
The average-case complexity of a problem is, in many cases, a more
significant measure than the worst-case complexity.
It is known that some $\NP$-complete problems, such as $k$-coloring and
Hamiltonian cycle, can be solved quickly on average.
This has motivated
the development of the study of average-case analysis of algorithms.
Levin~\cite{Levin86} initiated a general study of average-case complexity by
defining a robust notion of average polynomial time and the notion of
distributional $\NP$-completeness.
Cai and Selman~\cite{CaiSelman99} gave a general definition of {\em $T$ on
average} for arbitrary
time bounds $T$. They observed difficulties with Levin's definition of
average polynomial time when applied to unreasonable input distributions.
Their definition of average polynomial time slightly modifies Levin's
definition to avoid these difficulties.
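Levin's definition, referenced above, can be stated explicitly. This is the standard formulation from the literature, not quoted from the thesis: a time bound $t$ is polynomial on $\mu$-average if

```latex
\exists \epsilon > 0 : \quad
  \sum_{x \neq \lambda} \frac{t(x)^{\epsilon}}{|x|}\, \mu'(x) < \infty
```

where $\mu'(x)$ denotes the probability of input $x$ and $\lambda$ is the empty string. Cai and Selman's modification generalizes this to arbitrary time bounds $T$ while avoiding the anomalies that arise under unreasonable distributions.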
In this thesis we study reasonable distributions,
distributionally-hard languages, and reductions among distributional
problems.
We prove several results demonstrating that it suffices to restrict
one's attention to reasonable distributions.
This is important because both Levin's definition and
Cai and Selman's definition yield unsatisfactory consequences when we
permit unreasonable distributions. We show that if $\NP$ has a
$\DTIME(2^n)$-bi-immune language, then every $\DistNP$-complete problem must
have a
reasonable distribution. We prove that the class $\ppcomp$, the set of languages
that can be decided in average polynomial time with respect to every
polynomial-time computable distribution, remains
unchanged when restricted to reasonable distributions. We strengthen and
present a simpler proof of a
result of Belanger and Wang~\cite{BelangerWang96}, which shows that Cai and
Selman's definition of
average polynomial time is not closed under many-one reductions.
Cai and Selman showed
that every $\P$-bi-immune language is distributionally-hard. We study
the question of whether there exist distributionally-hard languages that are not
$\P$-bi-immune. First we show that such languages exist if and only if $\P$
contains $\P$-printable-immune sets. Then we extend this characterization
significantly to include assertions about several traditional
questions about immunity, about finding witnesses for $\NP$-machines, and
about the existence of one-way functions.
Next we study polynomial-time reducibilities.
Ladner, Lynch, and Selman~\cite{LadnerLynchSelman75}
showed, in the context of worst-case complexity, that various
polynomial-time reductions differ in $\E$. We show similar results
for reductions between distributional problems and we show that
most of the completeness notions for $\DistEXP$ are
different.
Finally, we turn our attention to the question
of whether various notions of
$\NP$-completeness are different. Lutz and Mayordomo, under the
hypothesis that the $p$-measure of $\NP$ is not zero, and Ambos-Spies and
Bentzien, under the hypothesis that $\NP$ has a $p$-generic language,
showed that various completeness notions in $\NP$ are different.
We introduce a structural hypothesis not involving stochastic properties
and prove that the existence of a Turing complete language for $\NP$
that is not truth-table complete follows from our hypothesis.
This result is the first to provide evidence that truth-table
completeness for $\NP$ is weaker than Turing completeness for $\NP$.
%Z Tue, 28 Aug 2001 17:20:27 GMT
%U http://www.cse.buffalo.edu/tech-reports/2001-11.ps.Z
%A Ismail, Haythem O.
%T Reasoning and Acting in Time
%R 2001-11
%D August 24, 2001
%I Department of Computer Science and Engineering, SUNY Buffalo
%K Knowledge representation, cognitive robotics, temporal reasoning
%Y I.2.4
%X This dissertation investigates aspects of the design of an embodied
cognitive agent that interleaves reasoning, acting, and interacting with
other agents while maintaining a record of what has happened and is
happening in its environment. Among other things, such knowledge of its own
history allows the agent to reason more effectively when faced with an
emergency situation such as the failure of one of its acts. Crucial to
distinguishing what has happened from what is happening is a
notion of the present time, or ``now''. There has been much research in
artificial intelligence on issues of time. However, it is typically the
case that, although an agent reasons about time, it is not itself
situated {\em in} time. Once the agent has a concept of ``now'', one that
continuously changes to reflect the progression of time, a different kind
of temporal reasoning problem emerges. For example, given that an agent
could be told anything, how are formal representations of present states
to be distinguished from those of past states, where the former, but not
the latter, are pertinent to the agent's actions? And, given that the
present must be distinguished, how is ``now'' to be modeled so that the
mere passage of time does not result in computationally costly
knowledge-base-editing routines? How can the agent reason about ``now'',
when the very process of reasoning results in ``now'' changing? How can the
agent be endowed with a feel for how much time has passed, which seems
crucial for reasoning about persistence as time passes while the agent
acts?
In this dissertation, the above issues are investigated in detail.
A theory of subjective time is presented, accounting for a cognitive
agent's {\em vague} concept of ``now'', its sense of temporal progression,
and its feel for how much time has passed. An investigation of the impact
of embodiment and time perception on issues of reasoning about persistence
as the agent acts comes out as a natural by-product of the theory. The
theory of subjective time is wrapped around a core logic of objective time
that axiomatizes various aspectual phenomena needed for reasoning, acting,
and natural language interaction. Unlike most theories of linguistic
aspect, the logic of aspect presented here accounts for the agent's
knowledge of states, events, and processes, not only from static
natural language inputs, but also from the dynamic accumulation of
perceptual and proprioceptual information as events unfold in time.
Based on the logical analysis of the notion of telicity (the analysis goes
beyond the standard telic/atelic distinction), a theory of how cognitive
agents may employ reasoning to control the execution of sequences of acts
is presented. The theory proposes a principled way by which agents may
decide when it is time to move on to the next step in a sequence; an issue
that has not been given much attention in the literature. Finally, the
dissertation establishes a framework for interrupt-handling and error
recovery based on a system of context-sensitive priorities among acts.
%Z Wed, 29 Aug 2001 17:12:17 GMT
%U http://www.cse.buffalo.edu/tech-reports/2001-12.pdf.Z
%A Song, Yuqing and Zhang, Aidong
%T Monotonic Tree of Images and Its Application in Image Processing
%R 2001-12
%D August 29, 2001
%I Department of Computer Science and Engineering, SUNY Buffalo
%K Image retrieval
%Y Image processing
%X Contour trees have been used in geographic information systems (GIS)
and medical imaging to display scalar data. Contours are only defined for
continuous functions. For an image represented by discrete data,
a continuous function is first defined as an interpolation of the data.
Then the contour tree is defined on this continuous function.
In this paper, we introduce a new concept, termed the monotonic line,
which is defined directly on discrete data.
All monotonic lines in an image form a tree, called the monotonic tree.
Compared with contour trees, monotonic trees avoid the interpolation
step and thus can be computed more efficiently.
The monotonic tree can be reduced, and the reduced monotonic tree can
also serve as a hierarchical representation of image structures in
image processing.
In particular, we demonstrate its application
to image smoothing and texture retrieval.
Experiments show that our schemes are
successful in both noise reduction and texture retrieval.
%Z Tue, 04 Sep 2001 14:36:20 GMT
%U http://www.cse.buffalo.edu/tech-reports/2001-13.pdf.Z
%A Qiao, Chunming
%A Xu, Dahai
%T Distributed Partial Information Management (DPIM) Schemes for Survivable Networks - Part I
%R 2001-13
%D July 10, 2001
%I Department of Computer Science and Engineering, SUNY Buffalo
%K distributed routing and signaling, bandwidth guarantee, sharing, allocation and deallocation, protection
%Y Algorithms
%X This paper describes a novel framework,
called Distributed Partial Information
Management (or DPIM). It addresses several major challenges
in achieving efficient shared path protection under
distributed control
with only partial information, including
(1) how much partial information about existing active and backup paths
(or APs and BPs respectively) is maintained and exchanged;
(2) how to obtain a good estimate of the bandwidth needed by a candidate BP,
called BBW, and subsequently select a pair of AP and BP for a
connection establishment request so as to minimize total bandwidth consumption
and/or maximize revenues;
(3) how to distributively allocate minimal BBW (and de-allocate maximal BBW) via distributed signaling; and
(4) how to update and subsequently exchange the partial information.
A DPIM-based scheme using Integer Linear Programming (ILP)
is described to illustrate our approach.
In addition, an ultra-fast and efficient heuristic scheme
is described. With about the same amount of partial information,
the heuristic-based DPIM scheme achieves almost
as good performance as the ILP-based DPIM scheme,
and much better performance than another ILP-based scheme described in
[1].
The paper also presents an elegant method to support
dynamic requests for protected, unprotected, and pre-emptable
connections in the unified DPIM framework.
%Z Tue, 02 Oct 2001 01:22:13 GMT
%U http://www.cse.buffalo.edu/tech-reports/2001-14.pdf.Z
%A Xu, Dahai
%A Qiao, Chunming
%T Distributed Partial Information Management (DPIM) Schemes for Survivable Networks - Part II
%R 2001-14
%D July 10, 2001
%I Department of Computer Science and Engineering, SUNY Buffalo
%K bandwidth sharing, distributed routing, on-line heuristic, potential cost
%Y Algorithms
%X A major challenge in shared path protection is to select
a jointly optimized pair of active and backup paths.
While Integer Linear Programming (ILP) based approaches are
notoriously time-consuming, existing heuristics such as active path first
(APF) achieve only sub-optimal results.
In this paper, we propose a novel heuristic, called
APF with potential backup cost (APF-PBC), under the distributed partial
information management framework described in [1]
for ultra-fast processing of on-line requests.
We show that APF-PBC can outperform
existing ILP-based schemes
given exactly the same input (either partial or complete information) and
constraints.
We also show how an intuitive function to compute the potential backup cost
is derived mathematically based on
the statistical analysis of experimental data.
%Z Tue, 02 Oct 2001 01:23:49 GMT
%U http://www.cse.buffalo.edu/tech-reports/2001-15.ps.Z
%A Jayaraman, B.
%A Tambay, P.
%T Semantics and Applications of Constrained Objects
%R 2001-15
%D October 12, 2001
%I Department of Computer Science and Engineering, SUNY Buffalo
%K Constrained objects, Modeling, Object-oriented, Constraint Logic Programming, Semantics
%Y Languages
%X This paper describes a compositional approach to modeling complex systems.
The building block, called a constrained object, is an object whose internal
state is governed by a set of (declarative) constraints. Especially in the
engineering domain, a constrained object models in a natural manner the internal
state and invariants of an engineering artifact, e.g., a resistor in
a circuit, a joint in a truss, a gear in a gear box, a heat exchanger
in a chemical process plant, etc. When several constrained objects are
aggregated to form a complex object, their internal states might further
have to satisfy interface constraints. The resulting behavior of the
system is obtained by a process of logical inference and constraint satisfaction.
Our work extends previous research by providing a richer modeling language,
with quantified and conditional constraints, as well as preferences.
We show that the paradigm of constrained objects is superior to
both a pure object-oriented language and a pure constraint language.
We present several examples in this paper, and also describe the formal
declarative and operational semantics of our language. The declarative
semantics for constrained objects is given in set-theoretic
terms, and the operational semantics makes use of top-down goal
reduction, constraint solving, and pruning according to preferences.
%Z Wed, 21 Nov 2001 14:48:59 GMT
%U http://www.cse.buffalo.edu/tech-reports/2001-16.ps.Z
%A Mahapatra, Nihar R.
%A Dutt, Shantanu
%T An Efficient Delay-Optimal Distributed Termination Detection Algorithm
%R 2001-16
%D November 27, 2001
%I Department of Computer Science and Engineering, SUNY Buffalo
%K Broadcast, detection delay, distributed computation, k-ary n-cubes, message complexity, message passing, termination detection.
%Y F.2 ANALYSIS OF ALGORITHMS AND PROBLEM COMPLEXITY
%X One of the important issues to be addressed when solving problems on
parallel machines or distributed systems is that of efficient
termination detection. Numerous schemes with different performance
characteristics have been proposed in the past for this purpose. These
schemes, while being efficient with regard to one performance metric,
prove to be inefficient in terms of other metrics. A significant
drawback shared by all previous methods is that they may take as long
as $\Theta(P)$ time to detect and signal termination after its actual
occurrence, where $P$ is the total number of processing elements.
Detection delay is arguably the most important metric to optimize,
since it is directly related to the amount of idling of computing
resources and to the delay in the utilization of results of the
underlying computation. In this paper, we present a novel termination
detection algorithm that is simultaneously optimal or near-optimal with
respect to all relevant performance measures on any topology. In
particular, our algorithm has a best-case detection delay of
$\Theta(1)$ and a finite optimal worst-case detection delay on any
topology equal in order terms to the time for an optimal one-to-all
broadcast on that topology---we derive a general expression for an
optimal one-to-all broadcast on an arbitrary topology, which is an
interesting new result in itself. On $k$-ary $n$-cube tori and meshes,
the worst-case delay is $\Theta(D)$, where $D$ is the diameter of the
architecture. Further, our algorithm has message and computational
complexities of $O(\max(MD,P))$ ($\Theta(\max(M,P))$ on the average for
most applications---the same as other message-efficient algorithms) and
an optimal space complexity of $\Theta(P)$, where $M$ is the total
number of messages used by the underlying computation. We also give a
scheme using counters that greatly reduces the constant associated with
the average message and computational complexities, but does not suffer
from the counter-overflow problems of other schemes. Finally, unlike
some previous schemes, our algorithm does not rely on FIFO ordering for
message communication to work correctly.
%Z Wed, 05 Dec 2001 15:41:47 GMT
%U http://www.cse.buffalo.edu/tech-reports/2002-01.ps.Z
%A Anand, Vishal
%A Chauhan, Sunit
%A Qiao, Chunming
%T Sub-path Protection: A New Framework for Optical Layer Survivability and its Quantitative Evaluation
%R 2002-01
%D January 02, 2002
%I Department of Computer Science and Engineering, SUNY Buffalo
%K WDM networks, lightpath, wavelength routing, wavelength assignment, protection, restoration, path protection, link protection, sub-path protection, time to recover.
%X In wavelength division multiplexed (WDM) networks with 1:1 path
protection, a link-disjoint protection (backup) path is set up at
the time of setting up a working (primary) path. Hence, the failure of
a single fiber-link does not cause huge data losses.
Typically, past research has considered two survivability paradigms:
(a) path protection/restoration and (b) link protection/restoration.
In path-based schemes, a backup path (which is link-disjoint with the
primary path) and wavelength are reserved during primary-path setup,
while in link-based schemes, a link-disjoint backup is found and
reserved for every link along the primary path.
Each of these schemes has its advantages and disadvantages in terms of
capacity utilization and restoration times.
In this study, we introduce a new protection/restoration scheme called
sub-path protection and show that it has advantages over the
traditional link- and path-based schemes. Sub-path-based schemes have
the added advantage that they can survive multiple link failures.
Experimental results show that sub-path protection offers an
interesting trade-off between capacity utilization and recovery times.
A network provider can take advantage of such a scheme to optimize
network carrying capacity and survivability.
%Z Wed, 02 Jan 2002 16:09:56 GMT
%U http://www.cse.buffalo.edu/tech-reports/2002-02.ps.Z
%A Ngo, Hung Q.
%T P-Species and the q-Mehler Formula
%R 2002-02
%D January 24, 2002
%I Department of Computer Science and Engineering, SUNY Buffalo
%K bijective proof, q-Mehler formula, species, Kibble-Slepian formula
%X In this paper, we present a bijective proof of the $q$-Mehler formula.
The proof is in the same style as Foata's proof of the Mehler formula.
Since Foata's proof was extended to show the Kibble-Slepian formula,
a very general multilinear extension of the Mehler formula,
we hope that the proof provided in this paper helps
find some multilinear extension of the $q$-Mehler formula.
The basic idea to obtain this proof comes from generalizing a result by
Gessel. The generalization leads to the notion of species on permutations and
the $q$-generating series for these species. The bijective proof is then
obtained by applying this new exponential formula to a certain type of
species on permutations and a weight preserving bijection relating this
species to the $q$-Mehler formula.
Some by-products of the $q$-exponential formula are also derived.
%Z Thu, 24 Jan 2002 19:00:36 GMT
%U http://www.cse.buffalo.edu/tech-reports/2002-03.ps.Z
%A Burhans, Debra T.
%T A Question Answering Interpretation of Resolution Refutation
%R 2002-03
%D January 31, 2002
%I Department of Computer Science and Engineering, SUNY Buffalo
%K theorem proving, question answering
%X Early use of theorem provers for question answering identified the
presence of an answer with the presence of a proof. Later work
recognized other types of answers whose existence was not associated with
a proof, including intensional and conditional answers. This work is
concerned with the problem of defining what is meant by an answer in the
context of resolution theorem proving. All relevant clauses are
identified as answers, where relevance is determined with respect to a
question and a knowledge
base: any clause descended from the negated question is deemed relevant. This
definition of relevance is not in and of itself novel; rather, it is the way
in which the set of relevant clauses is partitioned that provides the key
to interpreting clauses as answers. A partition that divides the set of
relevant clauses into three classes, termed specific, generic, and
hypothetical, is proposed. These classes are formally distinguished by
the way in which literals in a clause share variables, with class membership
based on a property termed the closure of variable sharing of a literal. The
best answers are identified with relevant clauses that are, additionally,
informative. The informativeness of a clause is determined with respect to a
set of clauses termed the answer set, where an informative clause
contains information not conveyed by any other clause in the set. The
process of reasoning in search of a proof is recast as the process of
constructing a sequence of answer sets, starting with the empty set,
where each set in the sequence is at least as informative as the
previous set. It is this property that identifies question answering
as an anytime reasoning process. The complete answer set for a given
question and knowledge base is shown to be the fixed point of the
answer set generation function. While this potentially infinite set
may never be realized by a reasoner, the current answer set is
guaranteed to contain the most informative, relevant clauses
available at any point during reasoning. Different control
structures give rise to different answer set sequences, and may be
selectively employed to generate preferred answers as quickly as
possible. The definition of answer sets elucidates the importance of
separating answer search from answer selection, and highlights the
subjectivity of the notion of best answer as defined in many question
answering systems. The complete characterization of resolvents as
answers presented herein provides a framework that accounts for and
expands upon previous work on question answering in a resolution
theorem prover. In addition, it provides a foundation for further
work in question answering by establishing a context-independent
logical pragmatics of question answering.
%Z Thu, 07 Feb 2002 13:46:31 GMT
%U http://www.cse.buffalo.edu/tech-reports/2002-04.ps.Z
%A Garg, Ashim
%A Rusu, Adrian
%T Straight-line Drawings of Binary Trees with Linear Area and Good Aspect Ratio
%R 2002-04
%D April 24, 2002
%I Department of Computer Science and Engineering, SUNY Buffalo
%K tree drawing straight-line grid planar aspect ratio area
%X Trees are usually drawn planar, i.e., without any crossings. In this paper,
we investigate the area requirement of (non-upward) planar straight-line
drawings of binary trees. Let $T$ be a binary tree with $n$ vertices.
We show that $T$ admits a planar straight-line grid drawing with area
$O(n)$ and with any prespecified aspect ratio in the range $[1, n^{\alpha}]$,
where $\alpha$ is a constant such that $0 \le \alpha < 1$. We also show that
such a drawing can be constructed in $O(n \log n)$ time.
%Z Wed, 24 Apr 2002 19:34:51 GMT
%U http://www.cse.buffalo.edu/tech-reports/2002-05.ps.Z
%A Garg, Ashim
%A Chanda, Amrita
%T Compact Encodings of Planar Orthogonal Drawings
%R 2002-05
%D April 24, 2002
%I Department of Computer Science and Engineering, SUNY Buffalo
%K encoding graph drawing orthogonal planar
%X We present time-efficient algorithms for encoding (and decoding) planar
orthogonal drawings of degree-4 and degree-3 biconnected and triconnected
planar graphs using a small number of bits. We also present time-efficient
algorithms for encoding (and decoding) turn-monotone planar orthogonal
drawings.
%Z Wed, 24 Apr 2002 19:34:55 GMT
%U http://www.cse.buffalo.edu/tech-reports/2002-08.pdf.Z
%A Wu, Hongyi
%T iCAR : an Integrated Cellular and Ad hoc Relaying System
%R 2002-08
%D May 16, 2002
%I Department of Computer Science and Engineering, SUNY Buffalo
%K iCAR, cellular, ad hoc, wireless, mobile, network, integrated
%X The cellular concept was introduced for wireless communication to
address the problem of scarce frequency resources. It is
based on subdividing the geographical area to be covered by
the network into a number of smaller areas called cells. Frequency
reuse in cells far away from each other increases the system's
capacity. At the same time, however, the cell boundaries prevent the
channel resources of a system from being fully available to users.
Because a mobile host (or MH) has no access to Data Channels (or
DCHs) in other cells, the channel efficiency, and consequently the
system capacity, is limited.
In this dissertation, we propose a new wireless system
architecture based on the integration of cellular and modern ad
hoc relaying technologies, called iCAR. It can efficiently balance
traffic loads and share channel resources between cells by using
Ad hoc relaying stations (ARSs) to relay traffic from one
cell to another dynamically. This not only increases the system's
capacity cost-effectively, but also reduces transmission power for
mobile hosts and extends system coverage. We analyze the system
performance in terms of the call blocking probability and queuing
delay using multi-dimensional Markov chains for the new call
requests and the call dropping probability for handoff requests,
and verify the analytical results via simulations. Our results
show that with a limited number of ARSs and some increase in the
signaling overhead (as well as hardware complexity), the call
blocking/dropping probability in a congested cell as well as the
overall system can be reduced. We also propose a seed-growing
approach for ARS placement, and discuss the upper bound on the
number of seed ARSs needed in the system. In order to
quantitatively evaluate and compare various ARS placement
strategies, we introduce a new performance metric called quality of
(ARS) coverage (QoC), and propose three rules of thumb as guidelines
for cost-effective ARS placement in iCAR. Furthermore, we propose the
signaling and routing protocols for establishing QoS guaranteed
connections for IP traffic in iCAR. In particular, we discuss how
a relaying route between a MH and a BTS in a nearby cell can be
established via ARSs, and evaluate the performance of the
protocols in terms of request rejection rate and signaling
overhead through simulations. In addition, we propose a novel
concept called ``managed mobility'' and address the ARS mobility
management in iCAR.
%Z Fri, 17 May 2002 14:51:59 GMT
%U http://www.cse.buffalo.edu/tech-reports/2002-07.ps.Z
%A Ngo, Hung Q.
%A Vu, Van H.
%T On Multi-rate Rearrangeable Clos Networks and a Generalized Edge Coloring Problem on Bipartite Graphs
%R 2002-07
%D May 10, 2002
%I Department of Computer Science and Engineering, SUNY Buffalo
%K Multirate Rearrangeability, Clos Networks, Bipartite Graph Edge Coloring
%X Chung and Ross (SIAM J. Comput., \textbf{20}, 1991)
conjectured that the minimum number
$m(n,r)$ of middle-state switches for the symmetric $3$-stage Clos network
$C(n,m(n,r),r)$ to be rearrangeable in the multirate environment is
at most $2n-1$.
This problem is equivalent to a generalized version of the bipartite
graph edge coloring problem.
The best bounds known so far on this function $m(n,r)$ are
$11n/9 \leq m(n,r) \leq 41n/16 + O(1)$, for $n, r \geq 2$, derived
by Du-Gao-Hwang-Kim (SIAM J. Comput., \textbf{28}, 1999).
In this paper, we make several contributions. Firstly, we give evidence
to show that even a stronger result might hold.
In particular, we show that $m(n,r) \leq \lceil (r+1)n/2 \rceil$, which implies
$m(n,2) \leq \lceil 3n/2 \rceil$ -
stronger than the conjectured value of $2n-1$.
Secondly, we derive that $m(2,r) = 3$ by an elegant argument.
Lastly, we improve both the best upper and lower bounds given above:
$\lceil 5n/4 \rceil \leq m(n,r) \leq 2n-1+\lceil (r-1)/2 \rceil$,
where the upper bound is an improvement
over $41n/16$ when $r$ is relatively small compared to $n$.
We also conjecture that $m(n,r) \leq \lfloor 2n(1-1/2^r) \rfloor$.
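The bounds stated in this abstract are simple closed-form expressions, so they can be evaluated directly. The following minimal Python sketch (an illustration with example values chosen here, not part of the report) tabulates the lower bound, the improved upper bound, and the conjectured value for a few $(n, r)$ pairs:

```python
import math

def lower_bound(n, r):
    # derived lower bound: ceil(5n/4)
    return math.ceil(5 * n / 4)

def upper_bound(n, r):
    # improved upper bound: 2n - 1 + ceil((r-1)/2)
    return 2 * n - 1 + math.ceil((r - 1) / 2)

def conjectured(n, r):
    # conjectured bound: floor(2n(1 - 1/2^r))
    return math.floor(2 * n * (1 - 1 / 2 ** r))

for n, r in [(8, 2), (16, 3), (32, 4)]:
    print(f"n={n}, r={r}: "
          f"{lower_bound(n, r)} <= m(n,r) <= {upper_bound(n, r)}, "
          f"conjectured <= {conjectured(n, r)}")
```

Note that the conjectured value lies between the proven lower and upper bounds for these small cases, consistent with the abstract's claims.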
%Z Fri, 24 May 2002 17:54:44 GMT
%U http://www.cse.buffalo.edu/tech-reports/2002-09.ps.Z
%A Ngo, Hung Q.
%T A New Routing Algorithm for Multirate Rearrangeable Clos Networks
%R 2002-09
%D May 22, 2002
%I Department of Computer Science and Engineering, SUNY Buffalo
%K Multirate rearrangeability, Clos Networks, Routing Algorithms
%X In this paper, we study the problem of finding routing algorithms for
multirate rearrangeable Clos networks that use as few
middle-stage switches as possible.
We propose a new routing algorithm called the ``grouping algorithm''.
This is a simple algorithm which uses
fewer middle-stage switches than all known strategies, given that the
number of input-stage and output-stage switches is relatively
small compared to the size of the input and output switches.
In particular, the grouping algorithm implies that
$m=2n+\lceil\frac{n-1}{2^k}\rceil$ is a sufficient number of middle-stage
switches for the symmetric $3$-stage Clos network $C(n,m,r)$ to
be multirate rearrangeable, where $k$ is any positive integer
and $r \leq n/(2^k-1)$.
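The sufficient middle-stage switch count given in this abstract is again a closed-form expression, so the trade-off between $k$ and the admissible range of $r$ can be tabulated directly. A minimal Python sketch (an illustration with example values chosen here, not part of the report):

```python
import math

def grouping_m(n, k):
    # sufficient number of middle-stage switches per the grouping algorithm:
    # m = 2n + ceil((n-1) / 2^k), valid for any positive integer k
    return 2 * n + math.ceil((n - 1) / 2 ** k)

def max_r(n, k):
    # the bound applies when r <= n / (2^k - 1); largest such integer r
    return n // (2 ** k - 1)

for k in [1, 2, 3]:
    n = 16
    print(f"n={n}, k={k}: m={grouping_m(n, k)}, valid for r <= {max_r(n, k)}")
```

Larger $k$ lowers the sufficient number of middle-stage switches, at the cost of restricting the number of input/output-stage switches $r$.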
%Z Fri, 24 May 2002 17:55:57 GMT
%U http://www.cse.buffalo.edu/tech-reports/2002-10.ps.Z
%A Ngo, Hung Q.
%T WDM Split Cross-connects and $3$-stage Clos Networks
%R 2002-10
%D July 09, 2002
%I Department of Computer Science and Engineering, SUNY Buffalo
%K WDM split cross-connects, Clos Networks, Routing Algorithms
%X In this paper, we first show that a
wavelength division multiplexed (WDM) split
cross-connect is strictly, wide-sense, or rearrangeably non-blocking if and
only if a corresponding $3$-stage Clos network is strictly, wide-sense
or rearrangeably non-blocking. This observation implies known results
on strictly non-blocking WDM split cross-connects and gives rise to new
results on wide-sense and rearrangeably non-blocking WDM split cross-connects.
We next introduce a routing algorithm for Clos networks which shows that
there are wide-sense non-blocking Clos networks that are not strictly
non-blocking. Hence, the same holds for WDM split cross-connects.
We also consider another demand model for WDM split cross-connects and
present associated results. Routing algorithms for WDM split cross-connects
could be viewed as on-line and off-line algorithms to edge-color a class of
bipartite multi-graphs.
%Z Tue, 16 Jul 2002 12:56:43 GMT
%U http://www.cse.buffalo.edu/tech-reports/2002-11.pdf.Z
%A Anand, Vishal
%A Qiao, Chunming
%T Effect of Wavelength Conversion in Survivable Wavelength Routed Optical WDM Networks with Alternate Routing
%R 2002-11
%D June 06, 2002
%I Department of Computer Science and Engineering, SUNY Buffalo
%K Optical network, Wavelength Division Multiplexing (WDM), Routing and Wavelength assignment (RWA), all-optical networks, wavelength conversion, network survivability, Traffic models, network blocking, alternate routing.
%X This study focuses on routing and wavelength assignment in
wavelength-routed optical wavelength-division-multiplexed networks
with circuit switching using wavelength conversion. Wavelength
conversion has been proposed for use in such networks to improve
efficiency. The crucial factors that determine the efficiency of
using wavelength conversion, as opposed to not using it, are the
number of wavelengths required to satisfy the network traffic demand
and the blocking of traffic demands by the network. In addition to
considering wavelength conversion, this study investigates the effect
of having multiple paths between each source-destination pair. We
consider the cases where protection is necessary for each of the
primary paths between every source-destination pair and hence a backup
path also has to be established during connection set up, and the case
when no protection is necessary. We study the effect of wavelength
conversion with different protection schemes. By simulating and
analyzing a large number of randomly generated networks we report
results of our study of the above schemes under both incremental and
dynamic traffic conditions. The study shows that utilizing wavelength
conversion has a considerable impact on reducing network blocking
under both incremental and dynamic traffic conditions; on the other
hand, we find that the difference in the wavelength requirements of
the various schemes considered is minimal.
%Z Wed, 07 Aug 2002 18:19:38 GMT
%U http://www.cse.buffalo.edu/tech-reports/2002-12.ps.Z
%A Song, Yuqing
%T MONOTONIC TREE AND ITS APPLICATION TO MULTIMEDIA INFORMATION RETRIEVAL
%R 2002-12
%D August 16, 2002
%I Department of Computer Science and Engineering, SUNY Buffalo
%K monotonic tree, information retrieval, image retrieval, semantics
%X Contour trees have been used in geographic information systems (GIS) and
computer imaging to display scalar data. Contours are only defined for
continuous functions. For discrete data, a continuous function is first
defined as an interpolation of the data. Then a contour tree is defined
on this continuous function. In this dissertation, we first introduce a
new concept termed the monotonic line, which models contour lines of discrete
functions. All monotonic lines in a discrete function form a tree, called
the monotonic tree. As compared with contour trees, monotonic trees avoid the
step of interpolation, and thus can be computed and manipulated more
efficiently. In addition, when used in image processing, monotonic trees
retrieve similar structures as contour trees do while preserving the
discreteness of image data. In computer imaging, the discreteness of
image data is one main factor which makes image processing and
understanding so difficult. The discreteness of image data is itself a
research topic.
Monotonic trees are used as a hierarchical representation of image
structures in content-based multimedia retrieval. Although a variety of
techniques have been developed for content-based image retrieval (CBIR),
automatic image retrieval by semantics still remains a challenging
problem due to the difficulty of object recognition and image
understanding. In this dissertation, we present a novel approach to support
semantics-based image retrieval on the basis of monotonic trees. The
structural elements of an image are modeled as branches (or subtrees) of
the monotonic tree. These elements are classified and clustered on the
basis of such properties as color, spatial location, coarseness, and
shape. Each cluster corresponds to some semantic feature. Following these
steps, images can be automatically annotated with category keywords, so
that high-level (semantics-based) querying and browsing of images can be
supported. This scheme is applied to retrieve scenery features from
images and to locate smooth backgrounds in images. Comparisons of
experimental results of this approach with conventional techniques using
low-level features demonstrate the effectiveness of our approach. In
future work, the monotonic tree model will be extended to general
semantic categories on both images and videos.
This dissertation has two main contributions. The first contribution is
the mathematical theory of the monotonic tree, which is the first theory
addressing definitions, properties, and computations of contour
structures directly on discrete functions. The second contribution is
the application of the monotonic tree model to semantics-based image retrieval.
The success of this model on scenery features and smooth background
implies its potential in analyzing general semantics of both images and
videos.
%Z Tue, 20 Aug 2002 13:57:34 GMT
%U http://www.cse.buffalo.edu/tech-reports/90-21.ps.Z
%A Shapiro, Stuart C.
%A Rapaport, William J.
%T The SNePS Family
%R 90-21
%D September 1, 1990
%I Department of Computer Science and Engineering, SUNY Buffalo
%K SNePS, semantic networks
%Y I.2 ARTIFICIAL INTELLIGENCE I.2.0 General-Cognitive simulation I.2.0 General-Philosophical foundations I.2.1 Applications and Expert Systems (see also H.4,J)-Natural language interfaces I.2.3 Deduction and Theorem Proving-Deduction (e.g., natural, rule-based) I.2.3 Deduction and Theorem Proving-Nonmonotonic reasoning and belief revision I.2.4 Knowledge Representation Formalisms and Methods I.2.4 Knowledge Representation Formalisms and Methods-Representation languages I.2.4 Knowledge Representation Formalisms and Methods-Semantic networks
%X SNePS, the Semantic Network Processing System, has been designed to be
a system for representing the beliefs of a natural-language-using
intelligent system (a ``cognitive agent''). It has always been the
intention that a SNePS-based
``knowledge base'' would ultimately be built, not by a programmer or
knowledge engineer entering representations of knowledge in some formal
language or data entry system, but by a human informing it using a
natural language (generally supposed to be English), or by the system
reading books or articles that had been
prepared for human readers.
This document describes the history of SNePS and some recent SNePS
projects.
It has been superseded by:
Shapiro, Stuart C., & Rapaport, William J. (1992), ``The SNePS Family,''
_Computers and Mathematics with Applications_, Vol. 23: 243-275.
Reprinted in Fritz Lehmann (ed.), _Semantic Networks in Artificial
Intelligence_ (Oxford: Pergamon Press, 1992): 243-275.
%Z Mon, 09 Sep 2002 12:52:04 GMT
%U http://www.cse.buffalo.edu/tech-reports/2002-13.ps.Z
%A Zhang, Huaming
%A He, Xin
%T On Even Triangulations of 2-Connected Embedded Graphs
%R 2002-13
%D July 06, 2002
%I Department of Computer Science and Engineering, SUNY Buffalo
%X Recently, Hoffmann and Kriegel proved an important combinatorial
theorem [HK96]: every 2-connected bipartite plane graph $G$
has a triangulation in which all vertices have even degree
(called an even triangulation). Combined with a classical
theorem of Whitney, this result implies that every such graph
has a 3-colorable plane triangulation. Using this theorem,
Hoffmann and Kriegel significantly improved the upper bounds of
several art gallery and prison guard problems.
A complicated $O(n^2)$ time algorithm was obtained in
[HK96] for constructing an even triangulation of $G$.
Hoffmann and Kriegel conjectured that there is an $O(n^{3/2})$
time algorithm for solving this problem.
In this paper, we develop a simple proof of the above theorem.
Our proof reveals and relies on a natural correspondence between
even triangulations of $G$ and certain orientations of
$G$. Based on this new proof, we obtain a very simple $O(n)$ time
algorithm for finding an even triangulation of $G$. We also extend
our proof to show the existence of even triangulations for similar
graphs on surfaces of higher genus.
%Z Mon, 09 Sep 2002 12:57:43 GMT