%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/90-12.ps.Z
%A W. Rapaport
%T Cognitive Science
%R 90-12
%D May 1990
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/90-24.ps.Z
%A X. He
%T On Finding the Rectangular Duals of Planar Triangulated Graphs
%R 90-24
%D 1990
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/91-04.ps.Z
%A S. Homer
%A A. Selman
%T Complexity Theory
%R 91-04
%D June 8, 1992
%I Department of Computer Science, SUNY Buffalo
%X This is an introductory survey of complexity theory.  Topics covered
include Modes of Computation, Complexity Classes and Complexity Measures, 
Basic Results, Nondeterminism and NP-Completeness, Relative Computability,
Parallelism, Probabilistic Complexity Classes, and Structural Complexity
Theory.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/91-05.ps.Z
%A X. He
%T Efficient Parallel Algorithms for Two Graph Layout Problems
%R 91-05
%D June 1991
%I Department of Computer Science, SUNY Buffalo
%X We present efficient parallel algorithms for solving two graph layout
problems: Find a F\'{a}ry embedding on a grid and construct a
rectangular dual for planar graphs.  The algorithm for the first problem
takes $O(\log n\log^* n)$ time with $O(n)$ processors on a PRAM.
The algorithm for the second problem takes $O(\log^2 n)$ time with
$O(n)$ processors.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/91-09.ps.Z
%A J. Haas
%A B. Jayaraman
%T Automatic Synthesis of Semantics for Context-free Grammars
%R 91-09
%D 1991
%I Department of Computer Science, SUNY Buffalo
%X We are investigating the mechanical transformation of an unambiguous
context-free grammar (CFG) into a definite-clause grammar (DCG) using a
finite set of examples, each of which is a pair $\langle s,~m\rangle$,
where $s$ is a sentence belonging to the language defined by the CFG and
$m$ is a semantic representation (meaning) of $s$.  The resulting DCG would
be such that it can be executed (by the interpreter of a logic programming
language) to compute the semantics for every sentence of the original CFG.
Three important assumptions underlie our approach: (i) the semantic
representation language is the {\it simply typed $\lambda$-calculus}; (ii)
the semantic representation of a sentence can be obtained from the semantic
representations of its parts ({\it compositionality}); and (iii) the
structure of the semantic representation determines its meaning ({\it
intensionality}).  The basic technique involves an enumeration of parse
trees for sentences of increasing size; and, for each parse tree, a set of
equations over (typed) function variables that represent the meanings of
the constituent subtrees is formulated and solved by means of a
higher-order unification procedure.  The solutions for these function
variables serve to augment the original grammar in order to derive the
final DCG.  A technique called {\it partial execution} is used to convert,
where possible, the generated higher-order DCG into a first-order DCG, to
facilitate efficient bidirectional execution.  In the appendix, we provide
detailed illustration of the use of such a system for storing and
retrieving information contained in natural language sentences.  Based on
our experimentation, we conclude that an improved version of this system
will facilitate rapid prototyping of natural language front-ends for
various applications.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/91-11.ps.Z
%A B. Jayaraman
%T The SuRE Programming Framework
%R 91-11
%D 1991
%I Department of Computer Science, SUNY Buffalo
%X The goal of this work is the design of logic programming
constructs that will help minimize the dependence on extralogical features.
Towards this end, we explore in this paper three forms of logical
assertions---equational, relational, and subset assertions---accompanied
by corresponding  matching and unification operations.
The uses of equations and relations are well-known, and hence
we concentrate on the uses of the subset assertion.
In an earlier paper, we introduced three forms of this assertion:
$f(terms) \supseteq expr$,
$f(terms) \supseteq set$, and
$f(terms) \supseteq set$ {\tt :-} {\it condition}.
In this paper, we show how subset assertions can be used to
re-formulate a relation as a set-valued function,
thereby declaratively specifying which arguments of the relation
are the input arguments  and thus obviating {\it mode} declarations.
We also present  two new ways of invoking a function
defined by subset  assertions:
using an  element goal, $f(terms) \ni x$,
to incrementally generate elements;
and using a  superset goal,
$f(terms) \supseteq s$, to lazily generate the entire set.
We show how lazy generation of the set
allows the search space to be pruned  declaratively,  obviating some
uses of the {\it cut},  while
eager generation of the set corresponds to the {\it setof} construct of Prolog.
Subset assertions can also help efficiently formulate  transitive-closure
operations,  which are traditionally expressed using {\it assert}
and {\it retract}.
In this paper, we also consider the  respective duals of the foregoing three
forms of subset assertions:
$f(terms) \subseteq expr$,
$f(terms) \subseteq set$, and
$f(terms) \subseteq set$ {\tt :-} {\it condition}.
We show that these new logical forms also have very natural uses,
especially in flow analysis problems.
The programming framework embodying all these features is
called {\bf SuRE}---for  {\bf Su}bsets, {\bf R}elations,
and {\bf E}quations---and is being implemented with a view to answer
the question:  can logic programming be declarative and practical?
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/91-12.ps.Z
%A A. Selman
%T A Taxonomy of Complexity Classes of Functions
%R 91-12
%D June 1992
%I Department of Computer Science, SUNY Buffalo
%X This paper comprises a systematic comparison of 
several complexity classes of functions that are 
computed nondeterministically in polynomial time or
with an oracle in NP. There are three components to this
work.
 - A taxonomy is presented that demonstrates all known
inclusion relations of these classes. For (nearly)
each inclusion that is not shown to hold, evidence
is presented to indicate that the inclusion is false. 
As an example, consider FewPF, the class of multivalued
functions that are nondeterministically computable in
polynomial time such that for each {\em x}, there is a
polynomial bound on the number of distinct output values
of $f(x)$. We show that ${\rm FewPF} \subseteq {\rm PF}^{{\rm NP}}_{tt}$.
However, we show ${\rm PF}^{{\rm NP}}_{tt} \subseteq $ FewPF if and 
only if NP = co-NP, and thus ${\rm PF}^{{\rm NP}}_{tt} \subseteq
{\rm FewPF}$ is likely to be false.
 - Whereas it is known that ${\rm P}^{{\rm NP}}(O(\log n)) = 
{\rm P}^{{\rm NP}}_{tt} \subseteq{\rm  P}^{{\rm NP}}$ [Hem87, Wagb, BH88], we
show that ${\rm PF}^{{\rm NP}}(O(\log n)) ={\rm PF}^{{\rm NP}}_{tt}$ implies
P = FewP and R = NP. Also, we show that ${\rm PF}^{{\rm NP}}
_{tt} = {\rm PF}^{{\rm NP}}$ if and only if
${\rm P}^{{\rm NP}}_{tt} ={\rm  P}^{{\rm NP}}$.
 - We show that if every nondeterministic 
polynomial-time multivalued function has a single-valued
nondeterministic refinement (equivalently, if every
honest function that is computable in polynomial-time
can be inverted by a single-valued nondeterministic
function), then there exists a disjoint pair of   
NP-complete sets such that every separator is NP-hard. 
The latter is a previously studied open problem that is 
closely related to investigations on promise problems.
This result motivates a study of reductions between
partial multivalued functions.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/91-13.ps.Z
%A S. Shapiro
%A H. Chalupsky
%A H. Chou
%T Connecting ARC/INFO and SNACTor Project Report
%R 91-13
%D June 1992
%I Department of Computer Science, SUNY Buffalo
%X This report describes an interface between ARC/INFO, a geographic
information management system, and SNACTor, the SNePS acting component.  It
also shows a small example interaction that demonstrates how ARC/INFO and
SNACTor can interact using the new interface, and how more sophisticated
future applications can make use of its functionality.  The interface was
designed and implemented during a two-month research project carried out in
Summer, 1990 at the State University of New York at Buffalo.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/91-14.ps.Z
%A A. Shende
%T Digital Analog Simulation of Uniform Motion in Representations of Physical N-Space by Lattice-work MIMD Computer Architectures
%R 91-14
%D April 1991
%I Department of Computer Science, SUNY Buffalo
%X This doctoral dissertation is part of an ongoing research project with John
Case, Dayanand S. Rajan and myself.  We are investigating the possibility
of solving problems in scientific computing involving the motion of objects
in a bounded region of physical {\em n}-space by (a) representing points in
the region of space by processors in a lattice-work mesh of processors with
local connections for inter-processor communication, and (b){\em literally,
analogically} simulating the motion of objects by representing the
particles of these objects by algorithms which move themselves about in the
lattice-work processors, much as the motion in real space of the particles
making up real objects, in effect, constitutes the motion of those objects.
	The main contributions of this dissertation are (i) two provably
correct algorithms to generate {\em virtually perfectly shaped} spherical
wavefronts emanating from a point source at virtually constant radial
speed, (ii) a provably correct algorithm template for simulating the
uniform-speed traversal of certain curves by a single particle, and (iii)
for the two algorithms of (i) and the template of (ii), the specification
of classes of meshes which are excellent discrete approximations to bounded
regions of Euclidean {\em n}-space, and on which these algorithms can be
executed.  Algorithms for the simulation of uniform-speed translation and
uniform angular speed revolution of single particles are presented as
specific instances of the algorithm template.  A {\em crucial} aspect of
{\em all} the algorithms in this dissertation is that, in spite of the
discreteness of the representation of physical {\em n}-space, the {\em
speed} of the simulations, using these algorithms, is approximately {\em
linearly proportional} to the speed of the phenomenon being simulated.
Furthermore, all decisions made by processors running these algorithms are
based only on {\em local} information, such as messages from neighboring
processors.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/91-16.ps.Z
%A X. He
%T Parallel Algorithm for Cograph Recognition with Applications (Revised)
%R 91-16
%D June 1991
%I Department of Computer Science, SUNY Buffalo
%X We present a parallel algorithm for recognizing cographs and constructing
their cotrees. The algorithm takes $O(\log^2 n)$ time with $O(n+m)$
processors on a CRCW PRAM, where $n$ and $m$ are the number of vertices and
edges of the graph. Using cotree representation, we obtain parallel
algorithms for solving the maximum matching and the permutation
representation problems for cographs using $O(\log n)$ time with $O(n)$
processors. We also obtain a parallel algorithm for the depth-first
spanning tree problem for permutation graphs (a class that properly contains
cographs) which takes $O(\log^2 n)$ time with $O(n)$ processors.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/91-18.ps.Z
%A K. Regan
%A T. Schwentick
%T On the Power of One Bit of a Number-P Function
%R 91-18
%D June 5, 1992
%I Department of Computer Science, SUNY Buffalo
%X We introduce the class $MP$ of languages $L$ which can be solved in
polynomial time with an oracle for one selected bit of the value $f(y)$ of
a $\# P$ function $f$ on a selected argument $y$.  This extends the
much-studied language classes $\oplus P$ and $PP$, which correspond to the
power of the least and most significant bits, respectively.  We show that
$MP$ is captured by the power of the middle bit; namely: a language $L$ is
in $MP$ iff for some $\# P$ function $f'$ and all $x$, $x \in L \iff$ the
middle bit of $f'(x)$ in binary notation is a `1'.  Also, S.~Toda's proof
[Toda89,91] that the polynomial hierarchy ($PH$) is contained in $P^{\# P}$
actually gives: $PH \subseteq BP[\oplus P] \subseteq C \oplus P \subseteq
MP$.
	The class $MP$ has complete problems, and is closed under
complements and under polynomial-time many-one reducibility.  We show that
$MP$ is closed under intersection iff, for any fixed $k > 0$, $k$ bits of a
$\# P$ function are no more powerful than one.  Moreover, if there is a
polynomial-time construction for the closure under intersection, then
$O(\log n)$-many bits have the same power as one.  We examine the subclass
$AmpMP$ of languages whose $MP$ representations can be ``amplified,''
showing that $BP[\oplus P] \subseteq AmpMP$, and that for any subclass
$\cal C$ of $AmpMP$ which is closed under polynomial-time many-one
reducibility, $MP^{\cal C} = MP$.  Hence $BP[\oplus P]$ is low for $MP$,
and if $C_{=}P \subseteq AmpMP$, then $PP^{PP} = MP$.  Finally, our work
leads to a purely mathematical question about the size of integer-valued
polynomials $p(x,y)$ which satisfy certain congruence relations, one which
also matters to the theory of bounded-depth circuits.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-03.ps.Z
%A S. Revankar
%A D. Sher
%T Supervised Image Segmentation
%R 92-03
%D 1992
%I Department of Computer Science, SUNY Buffalo
%X Image segmentation is a process of dividing an image into meaningful
regions.  Human supervision of computer-generated segmentation is essential
for tasks such as medical image analysis, surveillance-radar image
analysis, etc. The decisions to be made based on the segmented images are
often critical in such domains, and neither human imprecision nor the
unreliability of automatic algorithms is tolerable.  We develop a
collaborative scheme that facilitates active human supervision of computer
segmentation of images. Through this, we complement the reliability of a
human expert with the precision of segmentation and deformable modeling
algorithms that track a stationary or moving nonrigid object in a sequence
of snap-shots of the scene.
	In our system, an expert user observes the computer-generated
segmentation along with the original images in a user-friendly graphics
environment, and interactively indicates the wrongly classified regions
either by {\it pointing} or by {\it circling.} The precise boundaries of
indicated regions are computed by studying original image properties at
that region, and the human visual {\it attention distribution map} obtained
from the published psychological and psychophysical research. All
refinement operations are defined in the form of linear functions so that
the facilities such as error recovery, commutativity of operations, etc.
are inherent to the system.
	We use the developed theory to extract heart walls from a sequence
of two dimensional echocardiograms.  Echocardiography is one of the most
popular and safest methods of observing the heart.  Segmentation of the
heart wall in echocardiograms is essential to detect and quantify a large
spectrum of heart abnormalities.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-09.ps.Z
%A E. Chuang
%A D. Sher
%T Tests for Feature Detection
%R 92-09
%D 1992
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-10.ps.Z
%A E. Chuang
%A D. Sher
%T Evidence Representation & Combination in Low-level Vision
%R 92-10
%D 1992
%I Department of Computer Science, SUNY Buffalo
%X We discuss using statistics from $\chi^{2}$ tests and a summing rule$^{(1)}$
for evidence representation and combination in low-level vision. Feature
likelihoods are represented by statistics derived from $\chi^{2}$
tests, and the relationship between these likelihoods and noise level is
linear. Multiple statistics are combined by the summing rule into statistics
for larger features such as segments and polygons. This is justified
because the sum of a set of independent $\chi^{2}$ random variables is also
a $\chi^{2}$ random variable, and the geometric meaning of the sum is equal
to the integration of these addends.  Examples show that the performance of
the Hough transform for detecting line segments is significantly improved if
statistics are included.  Therefore, statistics adopted as evidence
can not only encode uncertainty, but also help to avoid error in the Hough
transform. The summing rule can combine statistics from $\chi^2$ tests, and
can also integrate the meanings of addends into the sum.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-12.ps.Z
%A Tin Kam Ho
%T A Theory of Multiple Classifier Systems and Its Application to Visual Word Recognition
%R 92-12
%D May 1992
%I Department of Computer Science, SUNY Buffalo
%X Despite the success of many pattern recognition systems in constrained
domains, problems that involve noisy input and many classes remain
difficult.  A promising direction is to use several classifiers
simultaneously, such that they can complement each other in correctness.
This thesis is concerned with decision combination in a multiple classifier
system, which is critical to its success.  A multiple classifier system
consists of a set of classifiers and a decision combination
function.  It is a preferred solution to a complex recognition problem
because it allows simultaneous use of feature descriptors of many types,
corresponding measures of similarity, and many classification procedures.
It also allows dynamic selection, so that classifiers adapted to inputs of
a particular type may be applied only when those inputs are encountered.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-13.ps.Z
%A S. Revankar
%A D. Sher
%T Pattern Extraction by Adaptive Propagation of a Regional Threshold
%R 92-13
%D 1992
%I Department of Computer Science, SUNY Buffalo
%X We describe a method for pattern extraction by adaptive
propagation of a regional threshold to the rest of the image.  In most
images there is an easily thresholded region.  We propagate the
regional threshold to the entire image using a linear
approximation of the image intensity gradients.  This method is useful in
fields where the extraction of the precise geometry of the binarized
patterns is required, and in fields where continuous thin lines
are to be extracted.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-14.ps.Z
%A S. Khoubyari
%T The Application of Word Image Matching in Text Recognition
%R 92-14
%D 1992
%I Department of Computer Science, SUNY Buffalo
%X Printed text recognition usually involves recognizing individual 
characters, and assembling the results to recognize words and sentences.
However, the performance of conventional character recognition systems
tends to suffer in the presence of moderate levels of degradation in
the text.
	A method is proposed that uses equivalence among frequent word images
to derive hypotheses for each word using the available language 
statistics.  Word images are clustered to determine equivalency.
The attributes of the clusters, as well as the relationships among them,
are matched with the same characteristics for the words in the language.
The method requires no explicit training, and is fairly tolerant to
image degradation.  The results for several sample sizes are reported.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-18.ps.Z
%A S. Sunder
%A X. He
%T An NC Algorithm 
for Finding Minimum Weighted Completion Time Schedule on Series Parallel Graphs
%R 92-18
%D 1992
%I Department of Computer Science, SUNY Buffalo
%X We present a parallel algorithm for solving the minimum weighted
completion time scheduling problem for transitive series parallel 
graphs. The algorithm takes $O(\log^2 n)$ time with $O(n^3)$ processors
on a CREW PRAM, where $n$ is the number of vertices of the input graph.
This is the first NC algorithm for solving the problem.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-21.ps.Z
%A H. Hexmoor
%A J. Lammens
%A S. Shapiro
%T An Autonomous Agent Architecture for Integrating Perception and Acting with Grounded Embodied Symbolic Reasoning
%R 92-21
%D 1992
%I Department of Computer Science, SUNY Buffalo
%X We describe an agent architecture in terms of general principles of
organization of components for an autonomous agent that functions in the
world.  The architecture is described independent of computational
components of the agent that are used to generate behavior.  It specifies
an integration of explicit representation and reasoning mechanisms,
embodied semantics through grounding symbols in perception and action,
mechanisms for finding and maintaining a correspondence between symbols and
sensory-perceived objects, and implicit representation of special-purpose
mechanisms of sensory processing, perception, and motor control for the
agent.  We then present components that we place in our general
architecture to build an agent that exhibits situated activity and
learning.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-22.ps.Z
%A D. Milun
%A D. Sher
%T Improving 
Edge Detectors on Compressed Images - A Trainable Markov Random Field Approach
%R 92-22
%D 1992
%I Department of Computer Science, SUNY Buffalo
%X We use Markov random fields to improve the output of the thinned Sobel
edge detector, applied to images compressed using the JPEG technique.
JPEG compression saves a lot of file space; however, it introduces
correlated errors into the images.  This is exactly a circumstance for
which our recently developed double neighborhood MRFs are suited
(Milun and Sher, 1992a).  Double neighborhood MRFs are constructed by
sampling from pairs of original images together with noisy imagery.
This models the noise within the MRF probability density function without
having to make assumptions about its form.  This provides an easy way to
generate Markov random fields for annealing or other relaxation
methods.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-24.ps.Z
%A A. Jagota
%A K. Regan
%T Performance
of MAX-CLIQUE Approximation Heuristics Under Description-Length Weighted Distributions
%R 92-24
%D 1992
%I Department of Computer Science, SUNY Buffalo
%X We study the average performance of several neural-net heuristics applied
to the problem of finding the size of the largest clique in an undirected
graph.  This function is NP-hard even to approximate within a constant
factor in the worst case [S.~Arora, C.~Lund, et al., FOCS '92], but the
heuristics we study are known to do quite well on average for instances
drawn from the uniform distribution on graphs of size $n$.  We extend a
theorem of M. Li and P. Vitanyi [Info. Proc. Lett. 5/92] to show that for
instances drawn from the {\em universal distribution\/} m(x), the
average-case performance of any approximation algorithm has the same order
as its worst-case performance.  The universal distribution is not
computable or samplable.  However, we give a realistic analogue q(x)
which lends itself to efficient empirical testing.  Our results so far
are: out of nine heuristics we tested, three did markedly worse under
q(x) than under the uniform distribution, but six others revealed little
change.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-25.ps.Z
%A S. Revankar
%A D. Sher
%T Constrained Contouring in Polar Coordinates
%R 92-25
%D 1992
%I Department of Computer Science, SUNY Buffalo
%X A constrained contour is an outline of a region of interest, obtained by
linking edges under the constraints of connectivity, smoothness, image
context, and subjective or user-specified constraints.  We discuss a
constrained contouring algorithm in polar coordinates to trace closed
contours.  The algorithm finds optimal contour locations in all radial
directions according to the constraints of context, distance from the
approximate contour, and the image gradient.  These optimal edge-points
are ordered according to their angular coordinates.  We treat these edge
points as the nodes of a graph of all possible contours.  The links of
the graph are weighted so that the shortest path between a pair of nodes
is the smooth contour that traces the maximum number of edge-points, and the
shortest cycle in the graph gives the optimal contour.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-26.ps.Z
%A S. Chakravarty
%A Y. Gong
%T An Algorithm for Diagnosing Two-Line Bridging Faults in Combinational Circuits
%R 92-26
%D 1992
%I Department of Computer Science, SUNY Buffalo
%X A novel algorithm for diagnosing all {\em Two-Line Bridging Faults} in
combinational circuits is presented. It assumes the
{\em Wired-OR (Wired-AND)} model and uses SOPS to represent the set of
possible bridging faults, making it space efficient,
and a set of rules for dropping faults from
the set of possible faults. The rules use {\em fault
dictionaries}, not for bridging faults but for {\em stuck-at} faults only.
Experimental results point to the
computational feasibility of considering all two-line bridging faults
while diagnosing combinational circuits, and the effectiveness of the
algorithm.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-27.ps.Z
%A F. Green
%A J. Kobler
%A K. Regan
%A T. Schwentick
%A J. Toran
%T The Power of the Middle Bit of a Number-P Function
%R 92-27
%D 1992
%I Department of Computer Science, SUNY Buffalo
%X We introduce the class MP of languages $L$ which can be solved in
polynomial time with the additional information of one bit from a $\#$P
function $f$. We prove that the polynomial hierarchy and the classes
$\mbox{\it Mod}_k\mbox{P}$, $k\geq2$, are low for this class.  We show that
the middle bit of $f(x)$ is as powerful as any other bit, and that a wide
range of bits around the middle have the same power.  By contrast, the
$O(\log n)$ many least significant bits are equivalent to $\oplus$P
[R.~Beigel, J.~Gill, and U.~Hertrampf, 1990], and we show that the $O(\log
n)$ many most significant bits are equivalent to PP; hence these bits are
probably weaker.  We study also the subclass AmpMP of languages whose MP
representations can be ``amplified,'' showing that
$\mbox{BPP}^{\oplus{\mbox{P}}}\subseteq \mbox{AmpMP}$, and that important
subclasses of AmpMP are low for MP.  We translate some of these results to
the area of circuit complexity using MidBit (middle bit) gates.  A MidBit
gate over $w$-many inputs $x_1,\dots,x_w$ is a gate which outputs the value
of the $\lfloor\log(w)/2\rfloor^{\rm th}$ bit in the binary representation
of the number $\sum_{i=1}^wx_i$.  We show that every language in ACC can be
computed by a family of depth-2 deterministic circuits of size $2^{(\log
n)^c}$ with a MidBit gate at the root and AND-gates of fan-in $(\log n)^c$
at the leaves. This result improves the known upper bounds for the class
ACC.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-30.ps.Z
%A D. Sher
%A K. Regan
%T Constructing Noise-Reducing Operators from Training Images
%R 92-30
%D 1992
%I Department of Computer Science, SUNY Buffalo
%X We discuss constructing non-linear noise-reduction operators from training
images.  We extract from
the training set a probability distribution over local neighborhoods.  Our
operator changes pixel values when such a change turns a low
probability neighborhood into a high probability one.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-32.ps.Z
%A S. Chakravarty
%A G. Theophilopoulos
%T Computing Robust Tests for Stuck-open Faults from Stuck-at Test Sets
%R 92-32
%D 1992
%I Department of Computer Science, SUNY Buffalo
%X An experimental system for computing robust tests for stuck-open faults in
static CMOS circuits is presented.  It constructs robust test-pairs from
tests for stuck-at faults by identifying several classes of FETs.  Robust
tests for stuck-open faults in FETs belonging to these classes are
constructed from any stuck-at test set by carefully constructing
initialization vectors.  Initialization vectors are constructed by
examining the "parity" of the paths in the circuit.  Robust tests for
additional faults are identified using stuck-open fault simulation.
Experimental results show that the system can compute robust tests for a
"very large" percentage of the stuck-open faults in a "reasonable" time.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-33.ps.Z
%A A. Jagota
%T Approximating Maximum Clique with a Hopfield Network
%R 92-33
%D 1992
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-01.ps.Z
%A R. Sarnath 
%T Doubly logarithmic time parallel sorting 
%R 93-01
%D 1993 
%I Department of Computer Science, SUNY Buffalo
%X Recently, attempts have been made to separate the
problem of parallel sorting from that of list ranking, in order to get
around the well known $\Omega (\log n/\log \log n)$ lower bound. These
approaches have been of two kinds - {\em chain sorting} and {\em padded
sorting}. Here we present nearly optimal, comparison based {\em padded}
sorting algorithms that run in average case time $O(\frac{1}{\epsilon ^2} +
\frac{1}{\epsilon } \log \log n)$ using $n^{1+\epsilon }$ processors, and
$O(n^{1+\epsilon })$ space, on a Common CRCW PRAM. From these results,
algorithms for {\em chain sorting} within the same time and processor
bounds can be easily obtained.  Using a similar approach, we also give an
$O(1)$ average case time, comparison based algorithm for finding the
largest of $n$ items using a linear number of processors.  The algorithm
for finding the maximum runs on a Common CRCW PRAM using only $n^{3/4}$
cells of shared memory. Finally, we obtain randomised algorithms for these
problems that run on Common/Tolerant CRCW PRAMs, and also satisfy the above
time and processor bounds. As a consequence of our results on sorting, we
can also show that the problem of sorting arbitrary input sequences can be
reduced to that of sorting integer inputs, within the above mentioned time
and processor bounds.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-02.ps.Z
%A R. Sarnath
%T Lower bounds for padded sorting and approximate counting
%R 93-02
%D 1993
%I Department of Computer Science, SUNY Buffalo
%X We examine the relationship between running time and error of parallel
sorting algorithms.  This is done by applying Hastad's main lemma to relate
the size, depth, and error of simple circuits that sort an input of 0's and
1's.  As a consequence, we obtain lower bounds for approximate counting as
well.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-04.ps.Z
%A Sreejit Chakravarty
%A Suresh Sivaprakasm
%T I_DDQ Measurement Based Diagnosis of Bridging Faults in Full Scan Circuits
%R 93-04
%D 1993
%I Department of Computer Science, SUNY Buffalo
%X An algorithm for diagnosing two-node bridging faults in static CMOS
combinational circuits (full scan circuits) is presented.  This
algorithm uses results from $I_{DDQ}$ testing.  The bridging faults
considered can be between nodes that are outputs of a gate or internal
nodes of a gate.  Experimental results on all the ISCAS89 circuits
show that $I_{DDQ}$ measurement based diagnosis using a small number
of randomly generated vectors is very effective.  Moreover, it is
computationally feasible to diagnose such a large number of bridging
faults when $I_{DDQ}$ measurement is used.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-05.ps.Z
%A Stephen Fenner
%A Steve Homer
%A Mitsunori Ogiwara
%A Alan L. Selman
%T On Using Oracles That Compute Values
%R 93-05
%D 1993
%I Department of Computer Science, SUNY Buffalo
%X This paper focuses on complexity classes of partial functions that are
computed in polynomial time with oracles in NPMV, the class of all
multivalued partial functions that are computable nondeterministically
in polynomial time.  Concerning deterministic polynomial-time
reducibilities, it is shown that
\begin{enumerate}
\item
        A multivalued partial function is polynomial-time computable
        with $k$ adaptive queries to NPMV if and only if
        it is polynomial-time computable via
        $2^k-1$ nonadaptive queries to NPMV.
\item
        A characteristic function is polynomial-time computable
        with $k$ adaptive queries to NPMV if and only if
        it is polynomial-time computable with
        $k$ adaptive queries to NP.
\item
        Unless the Boolean hierarchy collapses,
        for every $k$, $k$ adaptive (nonadaptive) queries to NPMV are
        different from $k+1$ adaptive (nonadaptive) queries to NPMV.
\end{enumerate}
Nondeterministic reducibilities, lowness and the difference hierarchy
over NPMV are also studied.  The difference hierarchy for partial functions
does not collapse unless the Boolean hierarchy collapses, but, surprisingly,
the levels of the difference and bounded query hierarchies
do not interleave (as is the case for sets) unless the polynomial hierarchy
collapses.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-09.ps.Z
%A Sreejit Chakravarty
%A Yiming Gong
%T A Diagnostic Simulation Algorithm for Stuck-at Faults in Combinational Circuits
%R 93-09
%D 1993
%I Department of Computer Science, SUNY Buffalo
%X Two faults are said to be equivalent, w.r.t. a test set $T$, iff they
cannot be distinguished by any test in $T$.  The sizes of the
equivalence classes are used as a basis for comparing the diagnostic
capability of two given test sets.  A novel algorithm for computing the
equivalence classes of stuck-at faults, in combinational (full scan)
circuits, w.r.t. a given test set is presented.  Experimental results
show the algorithm to be more time- and space-efficient than
a previously known algorithm.
%Z Tue, 19 Sep 1995 15:03:53 GMT
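The fault-equivalence notion above (faults indistinguishable by every test in $T$) admits a naive illustration: group faults by their detection signatures over $T$. This Python sketch is not the report's algorithm, and the `detects` predicate is a hypothetical interface supplied by the caller:

```python
from collections import defaultdict

def equivalence_classes(faults, tests, detects):
    """Group faults that no test in `tests` distinguishes.

    `detects(fault, test)` -> True iff `test` detects `fault`.
    Two faults are equivalent w.r.t. the test set iff their
    detection signatures agree on every test.
    """
    classes = defaultdict(list)
    for f in faults:
        signature = tuple(detects(f, t) for t in tests)
        classes[signature].append(f)
    return list(classes.values())
```

This signature grouping is quadratic in the worst case and enumerates all tests per fault; the report's contribution is precisely avoiding that cost in time and space.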

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-10.ps.Z
%A H. Hexmoor
%A J. Lammens
%A S. C. Shapiro
%T Embodiment in GLAIR: A Grounded Layered Architecture with Integrated Reasoning for Autonomous Agents
%R 93-10
%D February 1993
%I Department of Computer Science, SUNY Buffalo
%X In order to function robustly in the world, autonomous agents need to
assimilate concepts for physical entities and relations, grounded in
perception and action. They also need to assimilate concepts for perceptual
properties like color, shape, and weight, and perhaps eventually even for
nonphysical objects like unicorns. The process of acquiring concepts that
carry meaning in terms of the agent's own physiology we call embodiment.
Unlike current robotic agents, those endowed with embodied concepts will
more readily understand high-level instructions. As a consequence, these
robots will not have to be instructed at a low level.  We have developed an
autonomous agent architecture that facilitates embodiment of action and
perception, and accommodates embodied concepts for both physical and
nonphysical objects, properties, and relations.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-13.ps.Z
%A J. Lammens
%A H. Hexmoor
%A S. C. Shapiro
%T Of Elephants and Men
%R 93-13
%D April 1993
%I Department of Computer Science, SUNY Buffalo
%X In his "elephant paper", Brooks criticized the ungroundedness of traditional
symbol systems, and proposed physically grounded systems as an alternative,
in particular the subsumption architecture. Although we are still
struggling with many of the issues involved, we believe we have some
contributions to make towards solving some of the open problems with
physically grounded systems, particularly with respect to whether or how to
integrate the old with the new. In this paper we describe an agent
architecture that specifies an integration of explicit representation and
reasoning mechanisms, embodied semantics through grounding symbols in
perception and action, and implicit representations of special-purpose
mechanisms of sensory processing, perception, and motor control. We then
present components that we place in our general architecture to build
agents that exhibit situated activity and learning, and finally a physical
agent implementation and two simulation studies. The gist of our paper is
that the Brooksian behavior generation approach goes a long way towards
modeling elephant behavior, which we find very interesting, but that in
order to model more deliberative behavior we may also need something else.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-15.ps.Z
%A H. Hexmoor
%A J. Lammens
%A G. Caicedo
%A S. C. Shapiro
%T Behavior Based AI, Cognitive Processes, and Emergent Behaviors in Autonomous Agents
%R 93-15
%D April 1993
%I Department of Computer Science, SUNY Buffalo
%X Behavior based AI has questioned the need for modeling intelligent agency
using generalized cognitive modules for perception and behavior generation.
Behavior based AI has demonstrated successful interactions in unpredictable
environments in the mobile robot domain. This has created a gulf between
"traditional" approaches to modeling intelligent agency and behavior based
approaches. We present an architecture for intelligent autonomous agents
which we call GLAIR (Grounded Layered Architecture with Integrated
Reasoning). GLAIR is a general multi-level architecture for autonomous
cognitive agents with integrated sensory and motor capabilities. GLAIR
offers an "unconscious" layer for modeling tasks that exhibit a close
affinity between sensing and acting, i.e., behavior based AI modules, and a
"conscious" layer for modeling tasks that exhibit delays between sensing
and acting. GLAIR provides learning mechanisms that allow for autonomous
agents to learn emergent behaviors and add them to their repertoire of
behaviors. In this paper we will describe the principles of GLAIR and some
systems we have developed that demonstrate how GLAIR based agents acquire
and exhibit a repertoire of behaviors at different cognitive levels.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-16.ps.Z
%A S. Ali
%A S. C. Shapiro
%T Natural Language Processing Using a Propositional Semantic Network with Structured Variables
%R 93-16
%D May 7, 1993
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-18.ps.Z
%A S. Sivaprakasam
%T Performance Enhancements in SunOS NFS
%R 93-18
%D May 1993
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-20.ps.Z
%A Joongmin Choi
%T Experience-Based Learning In Deductive Reasoning Systems
%R 93-20
%D 1993
%I Department of Computer Science, SUNY Buffalo
%X General knowledge is widely applicable, but relatively slow to apply to any
particular situation. Specific knowledge can be used rapidly where it
applies, but is only narrowly applicable.  We present an automatic scheme
to migrate general knowledge to specific knowledge during reasoning.  This
scheme relies on a nested rule representation which retains the rule
builder's intentions about which of the possible specializations of the
rule will be most useful.  If both general and specific knowledge is
available and applicable, a system may be slowed down by trying to use the
general knowledge as well as, or instead of, the specific knowledge.
However, if general knowledge is purged from the system after migration,
the system will lose the flexibility of being able to handle different
situations.  To retain the flexibility without paying the price in speed, a
shadowing scheme is presented that prevents general knowledge from being
used when specific knowledge migrated from it is available and applicable.
The combination of knowledge migration and knowledge shadowing allows a
deductive reasoning system to learn from and exploit previous experience.
Experience is represented by the instance relationship between the general
knowledge and the specific knowledge migrated from it.
	We also present techniques for implementing efficient rules of
inference in natural deduction systems by caching and recalling the history
of rule activation steps that alleviate duplicate pattern matchings and
binding conflict resolutions.  To reduce the complexity of manipulating
rule activation steps from exponential to polynomial, methods of
distributing the information about rule activation steps are developed that
minimize the number of activation steps and the number of substitution
compatibility tests among shared variables.
	An implementation of these schemes in a network-based reasoning
system is discussed.  Test results are shown that demonstrate the predicted
benefits of these ideas.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-07.ps.Z
%A Wei Shu
%A Min-You Wu
%T Sparse Implementation of Revised Simplex Algorithms on Parallel Computers
%R 93-07
%D July 01, 1993
%I Department of Computer Science, SUNY Buffalo
%X Parallelizing sparse simplex algorithms is a challenging problem:
because the matrices are very sparse and the communication is heavy, the
ratio of computation to communication is extremely low.
It becomes necessary to carefully select parallel algorithms,
partitioning patterns, and communication optimization to achieve a speedup.
Two implementations on Intel hypercubes are presented in this paper.
The experimental results show that a nearly linear speedup can be obtained
with the basic revised simplex algorithm.
However, the basic revised simplex algorithm has many fill-ins.
We also implement a revised simplex algorithm with LU decomposition.
It is a very sparse algorithm, and it is very difficult to achieve a speedup
with it.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-24.ps.Z
%A Kenneth W. Regan
%T Efficient Reductions from NP to Parity Using Error-Correcting Codes  (preliminary version)
%R 93-24
%D June 12, 1993
%I Department of Computer Science, SUNY Buffalo
%X This paper proves that every language in NP is recognized by an
RP[$\oplus$P] machine whose time complexity is quasilinear, apart from
the time to verify witnesses.  The results significantly improve the
number of random bits, success probability, and running time of
Valiant and Vazirani's original construction
({\em Theor. Comp. Sci.\/} 47:85--93, 1986), and
beat both the $2n$ random bits and time/success tradeoff in subsequent
methods based on universal hashing.
Questions of further
improvements are connected to open problems in the theory
of error-correcting codes.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-25.ps.Z
%A Kenneth W. Regan
%T On the Difference Between Turing Machine Time and Random-Access Machine Time
%R 93-25
%D July 12, 1993
%I Department of Computer Science, SUNY Buffalo
%K Computational complexity, theory of computation, Turing machines, random-access machines, models, simulation, finite automata
%X We introduce a model of computation called the {\em Block Move\/}
(BM) model.  The BM extends the {\em Block Transfer\/} (BT) model of
Aggarwal, Chandra, and Snir (FOCS 1987),
who studied time complexity under various
{\em memory access cost functions\/}
ranging from
$\mu_1(a) := a$ to $\mu_{\log}(a) := \lceil \log_2 a \rceil$.
We show that up to factors of $\log t$ in the total running time $t$,
BMs under $\mu_1$ are equivalent to multitape Turing machines,
and BMs under $\mu_{\log}$ are equivalent to log-cost RAMs.
We also prove that for any well-behaved $\mu$ the BM classes
${\rm D}\mu{\rm TIME}[t(n)]$ form a tight deterministic time hierarchy.
Whether there is any hierarchy
at all when $\mu$ rather than $t$ varies is tied to
long-standing open problems
of determinism vs. nondeterminism.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-26.ps.Z
%A Mauricio Osorio
%A Bharat Jayaraman
%T Subset Assertions and Negation-As-Failure
%R 93-26
%D July 27, 1993
%I Department of Computer Science, SUNY Buffalo
%K Subset Assertions, Transitive Closures, Memoization,
Negation-As-Failure,  Stratified Semantics, Well-Founded Semantics 
%X Subset assertions provide a declarative and natural means for expressing 
solutions to many problems involving sets.  This paper is motivated
by the use of subset assertions for formulating transitive closures
and solving containment constraints in applications of graph
traversal and program analysis.   In these applications, circular
containment constraints may arise, for which we propose an operational
strategy based upon memoization and reexecution of function calls.
We provide formal declarative and operational semantics for
this class of subset assertions.  One of the main technical results
of this paper is a succinct translation of subset assertions into normal 
program clauses [L87] such that the stratified semantics of the resulting
normal programs coincides with the declarative semantics of 
subset assertions.  This translation is interesting because the operational
semantics of subset assertions appears to be very different from that of normal
programs---due to the setof-like capability and the need
for reexecution for subset assertions, both of which are absent in normal
program clauses.  (However, this translation is not an acceptable
implementation of subset assertions due to its inefficiency.) 
We also discuss the connection between our proposed declarative
semantics and recent approaches such as stable and well-founded
semantics.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-21.ps.Z
%A Edith Hemaspaandra
%A Ashish V. Naik
%A Mitsunori Ogiwara
%A Alan L. Selman
%T P-Selective Sets, and Reducing Search to Decision vs. Self-Reducibility
%R 93-21
%D July 30, 1993
%I Department of Computer Science, SUNY Buffalo
%K computational, complexity, complexity classes, p-selective sets, self-reducible, search reduces to decision, proof systems, cheatable, left cuts
%Y F.1.2   Modes of Computation
 F.1.3   Complexity Classes (see also F.2)
%X We obtain several results that distinguish self-reducibility of a
language $L$
with the question of whether search reduces to decision for $L$.
These include:
\begin{itemize}
\item[(i)]
If ${\rm NE} \ne {\rm E}$, then there exists a set $L$ in ${\rm NP} - {\rm P}$
such that search reduces to decision for $L$, search does {\em not}
nonadaptively reduce to decision for $L$, and $L$ is {\em not}
self-reducible.
\item[(ii)]
If ${\rm UE} \ne {\rm E}$, then there exists a  language 
$L \in {\rm UP} - {\rm P}$ such
that search nonadaptively reduces to decision for $L$, but $L$ is
{\em not}
self-reducible.
\item[(iii)]
If ${\rm UE} \cap \mbox{co-{\rm UE}} \ne {\rm E}$, then there is a 
disjunctive self-reducible language
$L \in {\rm UP} - {\rm P}$ for which search does {\em not} nonadaptively
reduce to
decision.
\item[(iv)]
There is an oracle relative to which
satisfying assignments cannot be computed
nonadaptively
relative to SAT.
\end{itemize}
We prove that if ${\rm NE} \not\subseteq {\rm BPE}$, then there
is a
language $L \in {\rm NP} - {\rm BPP}$ such that $L$ is randomly self-reducible,
{\em not} nonadaptively  randomly self-reducible, and {\em not}
self-reducible. We obtain results concerning tradeoffs in multiprover
interactive proof systems, and obtain results that distinguish
checkable
languages from those that are nonadaptively checkable.
Many of our results depend on new techniques for constructing
p-selective sets.
We obtain a p-selective set that is {\em not} 
$\leq_{tt}^{\rm P}$-equivalent to any
tally language,
and we show that if ${\rm P} = {\rm PP}$, then every p-selective set is
$\leq_T^{\rm P}$-equivalent to a tally language.  
Similarly, if ${\rm P} = {\rm NP}$, then
every
cheatable set is $\leq_m^{\rm P}$-equivalent to a tally language. 
We construct a
recursive p-selective tally set that is {\em not} cheatable.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-22.ps.Z
%A R. Sarnath
%T A Randomized Parallel Algorithm for dfa-minimization
%R 93-22
%D August 03, 1993
%I Department of Computer Science, SUNY Buffalo
%K Parallel-algorithms, Partitioning, DFA minimization
%X The problem of finding the coarsest partition of a set $S$ with respect to 
another partition of $S$ and one or more functions on $S$ has several 
applications, one of which is the state minimization of finite state 
automata.  The problem has a well-known $O(n \log n)$ sequential
algorithm.  In this paper, we present efficient parallel randomized
algorithms for the problem.
%Z Tue, 19 Sep 1995 15:03:53 GMT
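For background, the coarsest-partition problem the abstract refers to can be illustrated by a naive quadratic sequential refinement: repeatedly split blocks until no function in the family separates elements of any block. This is only a sketch; the well-known $O(n \log n)$ method and the report's parallel randomized algorithms are substantially more involved.

```python
def coarsest_partition(S, initial, funcs):
    """Refine `initial` (a list of blocks partitioning S) until it is
    stable w.r.t. every f in `funcs`: two elements stay in the same
    block only if their images under each f lie in the same blocks.
    Blocks only split, never merge, so the loop terminates."""
    # map each element to the id of its current block
    block = {x: i for i, b in enumerate(initial) for x in b}
    while True:
        groups = {}
        for x in S:
            # an element's new key: its old block plus its images' blocks
            key = (block[x],) + tuple(block[f(x)] for f in funcs)
            groups.setdefault(key, []).append(x)
        if len(groups) == len(set(block.values())):
            return list(groups.values())   # partition is stable
        block = {x: i for i, b in enumerate(groups.values()) for x in b}
```

With `initial` set to the accepting/non-accepting split of a DFA and `funcs` its per-letter transition functions, the stable partition gives the equivalent states merged by DFA minimization.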

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-31.ps.Z
%A Johan M. Lammens
%A Stuart C. Shapiro
%T Learning Symbolic Names for Perceived Colors
%R 93-31
%D August 16, 1993
%I Department of Computer Science, SUNY Buffalo
%K color perception, learning, symbol grounding, vision, knowledge representation
%Y I.2.10   Vision and Scene Understanding (see also I.4.8,I.5)-Intensity, color, photometry and thresholding
 I.2.4   Knowledge Representation Formalisms and Methods
 I.2.9   Robotics-Sensors
%X We are working on a computational model of color perception and color
naming, which can be seen as an instance of symbol grounding in the domain
of color, or as an attempt to provide an artificial intelligent agent with
embodied concepts of color.  We discuss two areas of the model where
learning is used: learning a non-linear mapping between two color spaces,
and learning a relation between color space coordinates and a set of
symbolic color names. We have used a traditional error back-propagation
learning algorithm for the first problem, and are considering several
different learning paradigms for the second problem, ranging from
traditional clustering techniques to an experimental ``space warp'' method.
Using learning gives us a relatively easy way to determine a non-linear
transformation of spaces in the first case, and increases the flexibility
of the approach with respect to different application needs (and languages)
in the second case. We discuss the learning methods used or considered and
the problems encountered.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-30.ps.Z
%A R. Bar-Yehuda
%A V. Dabholkar
%A K. Govindarajan
%A D. Sivakumar
%T Randomized Local Approximations with Applications to the MAX-CLIQUE Problem
%R 93-30
%D August 17, 1993
%I Department of Computer Science, SUNY Buffalo
%X We present a heuristic scheme for finding a clique of maximum size or weight.
Our heuristic scheme uses approximation techniques for the weighted
vertex cover problem, due to R.~Bar-Yehuda and S.~Even \cite{BE85}.
The approach is based on the notion of making incremental
progress towards finding a clique.
At each step of our algorithm, one or more {\em local optimization}
techniques are attempted, and when these techniques do not make
progress, we perform {\em local approximations}.
In the local approximation step,  the algorithm selects a heuristic
from a set of heuristics, depending on the characteristics of
the graph at that stage.  Once a solution
is constructed, we attempt to iteratively refine the solution.
Randomization plays a key role at various steps of our algorithm.
%Z Tue, 19 Sep 1995 15:03:53 GMT
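As a toy illustration of making incremental progress towards a clique (not the report's scheme, which interleaves local optimization with Bar-Yehuda/Even-style local approximations), a bare randomized greedy heuristic might look like:

```python
import random

def greedy_clique(adj, trials=50, seed=0):
    """Randomized greedy clique heuristic.

    `adj` maps each vertex to its set of neighbors.  Each trial scans
    the vertices in a random order, adding a vertex whenever it is
    adjacent to every vertex already in the clique; the best clique
    over all trials is returned."""
    rng = random.Random(seed)
    best = []
    vertices = list(adj)
    for _ in range(trials):
        order = vertices[:]
        rng.shuffle(order)
        clique = []
        for v in order:
            if all(v in adj[u] for u in clique):
                clique.append(v)
        if len(clique) > len(best):
            best = clique
    return best
```

Randomizing the scan order is what lets repeated trials escape the bad insertion orders on which a single greedy pass gets stuck, which is the role randomization plays (far more carefully) in the report's algorithm.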

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-28.ps.Z
%A Lane A. Hemaspaandra
%A Ashish V. Naik
%A Mitsunori Ogiwara
%A Alan L. Selman
%T Computing Solutions Uniquely Collapses the Polynomial Hierarchy
%R 93-28
%D August 17, 1993
%I Department of Computer Science, SUNY Buffalo
%K computational complexity, complexity classes, NPSV, NPMV, refinements, selectivity, lowness, advice classes, nonuniform classes, polynomial hierarchy
%Y F.1.2   Modes of Computation
 F.1.3   Complexity Classes (see also F.2)
%X Is there a {\em single-valued\/} NP function that, when given a
satisfiable formula as input, outputs a satisfying assignment?  That is, can
a nondeterministic function cull just one satisfying assignment from a
possibly exponentially large collection of assignments?  We show that if
there is such a nondeterministic function, then the polynomial hierarchy
collapses to its second level.  As the existence of such a function is known
to be equivalent to the statement ``every multivalued NP function has a
single-valued NP refinement,'' our result provides the strongest evidence
yet that multivalued NP functions cannot be refined.
  We prove our
result via theorems of independent interest.
We say that a set $A$ is NPSV-selective (NPMV-selective) if there
is a 2-ary partial function in NPSV (NPMV, respectively) that decides which
of its inputs (if any) is ``more likely'' to belong to $A$;
this is a nondeterministic analog of the recursion-theoretic notion of
the semi-recursive sets and the extant complexity-theoretic notion of
P-selectivity.  Our hierarchy collapse follows by combining
the easy observation
that every set in NP is NPMV-selective with either of the following
two theorems that we prove:
(1)  If $A \in {\rm NP}$ is NPSV-selective, 
then $A \in ({\rm NP} \cap {\rm coNP})/{\rm poly}$. 
(2)  If $A \in {\rm NP}$ is NPSV-selective, then $A$ is Low$_2$.
To wit, either result implies that if every set in NP is NPSV-selective,
then the polynomial hierarchy collapses to its second level, 
${\rm NP}^{\rm NP}$.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-29.ps.Z
%A Lane A. Hemaspaandra
%A Albrecht Hoene
%A Ashish V. Naik
%A Mitsunori Ogiwara
%A Alan L. Selman
%A Thomas Thierauf
%A Jie Wang
%T Selectivity: Reductions, Nondeterminism, and Function Classes
%R 93-29
%D August 18, 1993
%I Department of Computer Science, SUNY Buffalo
%K computational complexity, complexity classes, NPSV, NPMV, refinements, selectivity, lowness, advice classes, nonuniform classes, polynomial hierarchy, NPSV-total, reductions, equivalences
%Y F.1.2   Modes of Computation
 F.1.3   Complexity Classes (see also F.2)
%X A set is P-selective if there is a
polynomial-time semi-decision algorithm for the set---an algorithm
that given any two strings decides which is ``more likely'' to be in
the set.  This paper studies two natural generalizations of
P-selectivity: the NP-selective sets and the sets reducible or
equivalent to P-selective sets via polynomial-time reductions.  We
establish a strict hierarchy among the various reductions and
equivalences to P-selective sets.  We show that the NP-selective sets are
in $({\rm NP} \cap {\rm coNP})/{\rm poly}$ and
are extended low, and those in NP
are Low$_2$; we also show that NP-selective sets cannot be
NP-complete unless ${\rm NP} = {\rm coNP}$.  By studying more general
notions of nondeterministic selectivity, we conclude that
all multivalued NP functions have single-valued NP refinements only if the
polynomial hierarchy collapses to its second level.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-33.ps.Z
%A Kenneth W. Regan
%T Machine Models and Linear Time Complexity
%R 93-33
%D August 16, 1993
%I Department of Computer Science, SUNY Buffalo
%K Complexity theory, machine models, simulations, complexity classes, linear time, lower bounds
%Y F.1.1   Models of Computation (see also F.4.1)-Relations among models
 F.1.3   Complexity Classes (see also F.2)-Relations among complexity classes
 F.1.3   Complexity Classes (see also F.2)-Relations among complexity measures
 F.4.3   Formal Languages (D.3.1)-Classes defined by resource-bounded automata
%X This article first surveys known results on simulations among
commonly-studied models of sequential computation, and observes that the
problem of whether these models can simulate each other in linear time
is essentially wide open.  Then the status of research on nonlinear
lower bounds for natural problems in these models is examined.
The author's {\it Block Move\/} (BM) model
[K. Regan, UBCS TR 92-28, submitted for publication] is described in
terms of an ``editing game'' on strings, with a cost parameter
$\mu$ that reflects features of real machines.
Results comparing BM and TM and RAM complexity classes are given,
an avenue for possible nonlinear lower bounds is sketched, and
several open problems are posed.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-35.ps.Z
%A Kenneth W. Regan
%T A New Parallel Vector Model, With Exact Characterizations of NC^k
%R 93-35
%D August 17, 1993
%I Department of Computer Science, SUNY Buffalo
%K Computational complexity, machine models, parallel computation, vector machines, circuits
%Y F.      Theory of Computation
 F.1.1   Models of Computation (see also F.4.1)-Unbounded-action devices (e.g., cellular automata, circuits, networks of machines)
 F.1.2   Modes of Computation-Alternation and nondetermination
 F.1.3   Complexity Classes (see also F.2)-Relations among complexity classes
%X This paper develops a new and natural parallel vector model,
and shows that for all $k \geq 1$, the languages recognizable in
$O(\log^k n)$ time and polynomial work in the model are exactly
those in NC$^k$.  Some improvements to other simulations
in the literature of parallel models and reversal complexity are given.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-34.ps.Z
%A Ashish V. Naik
%A Kenneth W. Regan
%A D. Sivakumar
%T Quasilinear Time Complexity Theory
%R 93-34
%D August 20, 1993
%I Department of Computer Science, SUNY Buffalo
%K Computational complexity, complexity classes, linear time, search vs. decision, error-correcting codes
%Y F.      Theory of Computation
 F.1.3   Complexity Classes (see also F.2)-Complexity hierarchies
 F.1.3   Complexity Classes (see also F.2)-Relations among complexity classes
%X This paper furthers the study of quasi-linear time complexity initiated
by Schnorr [J. ACM 25:136--145, 1976] and Gurevich and Shelah
[Proc. Logic at Botik '89, LNCS 363, pp. 108--118, 1989].
We show that the fundamental properties of the polynomial-time hierarchy
carry over to the quasilinear-time hierarchy.
Whereas all previously known versions of the Valiant-Vazirani
reduction from NP to parity run in quadratic time, we
give a new construction using error-correcting codes that
runs in quasilinear time.
We show, however, that the important equivalence between search problems
and decision problems in polynomial time is unlikely to carry
over: if search reduces to decision for SAT in quasi-linear time,
then all of NP is contained in quasi-polynomial time.
Other connections to work by Stearns and Hunt
[Math. Sys. Thy.  23:209--225, 1990] on ``power indices'' of
NP languages are made.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-37.ps.Z
%A Henry H. Hexmoor
%A Johan M. Lammens
%A Stuart C. Shapiro
%T An Autonomous Agent Architecture for Integrating "Unconscious" and "Conscious", Reasoned Behaviors
%R 93-37
%D August 24, 1993
%I Department of Computer Science, SUNY Buffalo
%K GLAIR, autonomous agents, color perception, learning behaviors
%Y I.2.0   General-Cognitive simulation
 I.2.4   Knowledge Representation Formalisms and Methods
 I.2.6   Learning (see also K.3.2)
 I.2.9   Robotics
 I.2.10   Vision and Scene Understanding (see also I.4.8,I.5)-Intensity, color, photometry and thresholding
%X We outline an autonomous agent architecture, GLAIR, and one of its
instantiations, the Air Battle Simulation game. Our architecture models
agents with ``conscious'' and ``unconscious'' behaviors. The architecture
provides for grounded symbolic representations through embodied perception,
and provides a mechanism for learning behaviors. We discuss how color
perception fits into the architecture as a particular case of grounding and
embodiment. We also discuss how the Air Battle Simulation implements an
autonomous agent conforming to the architecture, and how knowledge
migration and various other features of the architecture apply to it.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-36.ps.Z
%A Ashish V. Naik  
%A Alok Baveja 
%A Rajan Batta 
%A Jonathan P. Caulkins
%T Scheduling Crackdowns on Illicit Drug Markets
%R 93-36
%D August 30, 1993
%I Department of Computer Science, SUNY Buffalo
%K Crackdown Scheduling,  NP-completeness,  Approximation Algorithms
%Y F.2.2   Nonnumerical Algorithms and Problems (see also E.2-5,G.2,H.2-3)-Sequencing and scheduling 
 F.1.1   Models of Computation (see also F.4.1)-Computability theory
 E.5.1   Optimization
%X This paper presents an analytical approach for scheduling crackdowns 
on street-corner drug markets. The crackdown scheduling problem is 
shown  to be NP-Complete. A dynamic programming formulation is
presented with an exponential  time optimal algorithm. 
We then provide efficient optimal algorithms for several special cases
and approximation algorithms for the general case.
These  results show that the optimal strategy  is to give
priority to markets that take longer to bring down and which require low
levels of post-crackdown maintenance. The results are then extended to
incorporate dealer displacement between drug markets.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-23.ps.Z
%A Niloufer Mackey
%T Hamilton and Jacobi Meet Again:  Quaternions and the Eigenvalue Problem
%R 93-23
%D May 15, 1993
%I Department of Computer Science, SUNY Buffalo
%X The algebra isomorphism between $M_4(R)$ and $H \otimes H$,
where $H$ is the algebra of quaternions, has unexpected
computational payoff: it helps construct an orthogonal
similarity that 2x2 block-diagonalizes a 4x4 symmetric
matrix.  Replacing plane rotations with these more 
powerful 4x4 rotations leads to a quaternion-Jacobi 
method in which the `weight' of 4 elements (in a 2x2 block)
is transferred all at once onto the diagonal.  Quadratic 
convergence sets in sooner, and the new method requires 
at least one fewer sweep than plane-Jacobi methods.  An
analogue of the sorting angle for plane rotations is 
developed for these 4x4 rotations. 
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-39.ps.Z
%A David B. Sher
%A Charlotte E. Wafford
%A Davin Milun
%T Relating Gibbs distributions to empirically derived marginal distributions for image analysis
%R 93-39
%D November 23, 1993
%I Department of Computer Science, SUNY Buffalo
%K Markov random field; Gibbs distribution; ICM; simulated annealing; MAP; pseudoinverse 
%Y I.2.10   Vision and Scene Understanding (see also I.4.8,I.5)
 I.2.6   Learning (see also K.3.2)-Parameter learning
%X The log marginal probability ratios of a Gibbs distribution over an
8-connected neighborhood system are a linear function of its 66 clique
weights.  We empirically determine log marginal probability ratios of
artificial noiseless images and use the pseudoinverse method to
compute the closest Gibbs distribution.  
We compare these Gibbs distributions to {\it ad hoc} distributions suggested
in the literature and to the empirical marginals from which they are
derived. 
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-02.ps.Z
%A Jin-Yi Cai
%A Ashish V. Naik
%A Alan L. Selman
%T On P-selective sets and Adaptive versus Nonadaptive Queries to NP
%R 94-02
%D February 02, 1994
%I Department of Computer Science, SUNY Buffalo
%K p-selective, NP truth-table, adaptive queries, nonadaptive queries
%Y F.1.1   Models of Computation (see also F.4.1)-Computability theory
 F.1.2   Modes of Computation-Alternation and nondetermination
 F.1.3   Complexity Classes (see also F.2)
 F.1.3   Complexity Classes (see also F.2)-Relations among complexity classes
%X We show that if there exists a p-selective set that is
NP-hard under truth-table reductions, then every function that is
computable in polynomial time by truth-table access to an NP oracle
is computable in polynomial time by making at most $O(\log n)$
queries to an NP oracle.  As a consequence, it follows that if there
exists a tt-hard p-selective set for NP, then for all
$k>0,~{\rm NP}\subseteq {\rm DTIME}[2^{n/\log^k n}]$.
  Reporting progress on the question of whether every function that is
computable in polynomial time by truth-table access to an NP oracle
is computable in polynomial time by making at most $O(\log n)$
queries to an NP oracle, we show that if there is a constant $k$ such that
\[ {\rm PF}_{{\mbox{$n^k$-tt}}}^{\rm NP} \subseteq 
{\rm PF}^{\rm NP}[k\lceil \log n \rceil -1],  \]
then P = NP.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-38.ps.Z
%A Sreejit Chakravarty
%T A Study of Theoretical Issues in the Synthesis of Delay Fault Testable Circuits
%R 93-38
%D October 26, 1993
%I Department of Computer Science, SUNY Buffalo
%K Delay Fault Testable Circuits; Logic Optimization; Logic Synthesis; Testability Enhancing Transformations; Testability Preserving Transformations
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-40.ps.Z
%A Bharat Jayaraman
%A Mauricio Osorio
%A Kyonghee Moon
%T Partial Order Logic Programming
%R 93-40
%D November 30, 1993
%I Department of Computer Science, SUNY Buffalo
%K lattices, partial orders, functional programming, logic programming, sets, monotonic functions, memoization, fixed-point computation
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-01.ps.Z
%A Syed S. Ali
%T A "Natural Logic" for Natural Language Processing and Knowledge Representation
%R 94-01
%D February 09, 1994
%I Department of Computer Science, SUNY Buffalo
%K Natural Language Processing, Knowledge Representation, Surface Reasoning, Subsumption, ANALOG
%Y I.2.0   General-Cognitive simulation
 I.2.3   Deduction and Theorem Proving-Deduction (e.g., natural, rule-based)
 I.2.4   Knowledge Representation Formalisms and Methods-Predicate logic
 I.2.4   Knowledge Representation Formalisms and Methods-Representation languages
 I.2.4   Knowledge Representation Formalisms and Methods-Semantic networks
 I.2.7   Natural Language Processing-Discourse
 I.2.7   Natural Language Processing-Language generation
 I.2.7   Natural Language Processing-Language parsing and understanding
%X We define a knowledge representation and inference formalism that is
well suited to natural language processing.  In this formalism every
subformula of a formula is closed.  We motivate this by observing that
any formal language with (potentially) open sentences is an
inappropriate medium for the representation of natural language
sentences.  Open sentences in such languages are a consequence of the
separation of variables from their quantifier and type constraints,
typically in the antecedents of rules.  This is inconsistent with the
use of descriptions and noun phrases corresponding to variables in
language.  Variables in natural language are constructions that are
typed and quantified as they are used. A consequence of this is that
variables in natural language may be freely reused in dialog. This
leads to the use of pronouns and discourse phenomena such as ellipsis
involving reuse of entire subformulas.  We present an augmentation to
the representation of variables so that variables are not atomic
terms. These ``structured'' variables are typed and quantified as they
are defined and used.  This leads to an extended, more ``natural''
logical language whose use and representations are consistent with the
use of variables in natural language.  Structured variables simplify
the tasks associated with natural language processing and generation,
by localizing noun phrase processing.
  The formalism is defined in terms of a propositional semantic network,
starting from nodes and arcs connecting nodes, subsumption, matching,
to inference.  It allows the resolution of some representational
difficulties with certain classes of natural language sentences (e.g.
the so-called ``donkey'' sentences and sentences involving branching
quantifiers).  Reuse phenomena, such as pronominalization and
ellipsis, are captured in the representation by structure-sharing.  A
major advantage of this structured representation of variables is that
it allows a form of terminological and derived subsumption similar to
surface reasoning in natural language.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-03.ps.Z
%A Sreejit Chakravarty
%A Yiming Gong
%T Voting Model Based Diagnosis of Bridging Faults in Combinational Circuits
%R 94-03
%D February 15, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-42.ps.Z
%A Sreejit Chakravarty 
%A Paul J. Thadikaran
%T On Iddq Measurement Based Analysis of Bridging Faults in CMOS Circuits
%R 93-42
%D November 1993
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-04.ps.Z
%A Deepak Kumar
%T From Beliefs and Goals to Intentions and Actions: An Amalgamated Model of Inference and Acting
%R 94-04
%D March 11, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-23.ps.Z
%A Henry Hexmoor
%A Donald Nute
%T Methods for deciding what to do next and learning
%R 92-23
%D September, 1992
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-06.ps.Z
%A Sreejit Chakravarty
%A Vinay Dabholkar
%T Minimizing Power Dissipation in Scan Circuits During Test Application
%R 94-06
%D April 15, 1994
%I Department of Computer Science, SUNY Buffalo
%K Power dissipation, full isolated scan, full integrated scan
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-07.ps.Z
%A Henry H. Hexmoor
%T What are routines good for?
%R 94-07
%D April 15, 1994
%I Department of Computer Science, SUNY Buffalo
%X We show how agents with routines (a) use them to guide their everyday activity, and (b) use them to enrich their abstract concepts about acts.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-08.ps.Z
%A Ishfaq Ahmad
%A Min-You Wu
%A Jaehyung Yang
%A Arif Ghafoor
%T A Performance Assessment of Express on the iPSC/2 and iPSC/860 Hypercube Computers
%R 94-08
%D April 15, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-09.ps.Z
%A Henry H. Hexmoor
%T A Methodology for Developing Competent Agents Without Sensor and Actuator Profusion
%R 94-09
%D April 15, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-12.ps.Z
%A Kannan Govindarajan
%A Bharat Jayaraman
%A Surya Mantha
%T Preference Logic Programming: Optimization as Inference
%R 94-12
%D April 15, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-10.ps.Z
%A V.P. Dabholkar
%A S. Chakravarty
%T Minimizing Power Dissipation in Combinational Circuits During Test Application
%R 94-10
%D April 15, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-15.ps.Z
%A Jin-Yi Cai
%A Suresh Chari
%T On the Impossibility of Amplifying the Independence of Random Variables
%R 94-15
%D May 04, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-14.ps.Z
%A Jin-Yi Cai
%A Michael D. Hirsch
%T Rotation Distance, Triangulations of Planar Surfaces and Hyperbolic Geometry
%R 94-14
%D May 04, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-16.ps.Z
%A Jin-Yi Cai
%T Computing Jordan Normal Forms exactly for commuting matrices in polynomial time
%R 94-16
%D May 11, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-17.ps.Z
%A Jin-Yi Cai
%A Richard J. Lipton
%A Yechezkel Zalcstein
%T The complexity of the A B C problem resolved
%R 94-17
%D May 12, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-19.ps.Z
%A Frederic Green
%A Johannes Koebler
%A Kenneth W. Regan
%A Thomas Schwentick
%A Jacobo Toran
%T The Power of the Middle Bit of a #P Function
%R 94-19
%D May 20, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-23.ps.Z
%A Kenneth W. Regan
%T Linear-Time Algorithms in Memory Hierarchies
%R 94-23
%D May 20, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-24.ps.Z
%A Kenneth W. Regan
%T Linear Speed-Up, Information Vicinity, and Finite-State Machines
%R 94-24
%D May 20, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-21.ps.Z
%A Ashish V. Naik
%A Kenneth W. Regan
%A D. Sivakumar
%T Quasilinear Time Complexity Theory
%R 94-21
%D May 20, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-18.ps.Z
%A Kenneth W. Regan
%T Linear Time and Memory-Efficient Computation
%R 94-18
%D May 20, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-20.ps.Z
%A Lide Li
%A Mitsunori Ogihara
%A Kenneth W. Regan
%T On Information From #P Functions
%R 94-20
%D May 20, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-22.ps.Z
%A Jin-Yi Cai
%A Richard J. Lipton
%A Luc Longpre
%A Mitsunori Ogihara
%A Kenneth W. Regan
%A D. Sivakumar
%T Communication Complexity of Key Agreement on Limited Ranges
%R 94-22
%D May 20, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-26.ps.Z
%A Johan M. Lammens
%T A Computational Model of Color Perception and Color Naming
%R 94-26
%D June 24, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-29.ps.Z
%A Kenneth W. Regan
%T Index Sets and Presentations of Complexity Classes (revised version)
%R 94-29
%D July 29, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-30.ps.Z
%A Devashis Jana
%A Bharat Jayaraman
%T Set Constructors, Finite Sets, and Logical Semantics
%R 94-30
%D August 08, 1994
%I Department of Computer Science, SUNY Buffalo
%K set constructors, finite sets, set axioms, set unification, freeness, Herbrand models.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-43.ps.Z
%A Shriram Revankar
%T Supervised Image Segmentation
%R 93-43
%D 1993
%I Department of Computer Science, SUNY Buffalo
%Y J.3.0   Biology
 H.1.2   User/Machine
 I.4.6   Segmentation
 I.4.7   Feature Measurement
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-34.ps.Z
%A Yiming Gong
%A Sreejit Chakravarty
%T A Diagnosis Algorithm for Bridging Faults in Combinational Circuits
%R 94-34
%D September 13, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-38.ps.Z
%A Jin-yi Cai
%A Pu Cai
%A Yixin Zhu
%T A fully polynomial time approximation scheme in scheduling deteriorating jobs
%R 94-38
%D October 02, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-28.ps.Z
%A William J. Rapaport
%T Understanding Understanding:  Syntactic Semantics and Computational Cognition
%R 94-28
%D October 20, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-33.ps.Z
%A Krishna Moorty Sivalingam
%T High-Speed Communication Protocols for All-Optical Wavelength Division Multiplexed Computer Networks
%R 94-33
%D October 20, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-27.ps.Z
%A Kannan Govindarajan
%A Bharat Jayaraman
%A Surya Mantha
%T Preference Logic Grammars
%R 94-27
%D June 24, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-13.ps.Z
%A Kannan Govindarajan
%A Bharat Jayaraman
%T Intensional Algorithmic Debugging
%R 94-13
%D May 20, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-25.ps.Z
%A Wei Shu
%T Runtime Incremental Parallel Scheduling (RIPS) on Distributed Memory Computers
%R 94-25
%D November 11, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-32.ps.Z
%A Amruth N. Kumar
%T Component Ontological Representation of Function For Candidate Discrimination in Model Based Diagnosis
%R 94-32
%D November 11, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-37.ps.Z
%A Devashis Jana
%T Semantics of Subset-Logic Languages
%R 94-37
%D November 11, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-39.ps.Z
%A Ronald Sanger Curtis
%T Data Structure Complexity Metrics
%R 94-39
%D November 11, 1994
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-27.ps.Z
%A Russ Miller
%T The Status of Parallel Processing Education
%R 93-27
%D July, 1993 (updated subsequently)
%I Department of Computer Science, SUNY Buffalo
%K Parallel Processing, Education
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-06.ps.Z
%A Xin He
%T Grid Embedding of Internally Triangulated Plane Graphs without Non-empty Triangles
%R 95-06
%D February 02, 1995
%I Department of Computer Science, SUNY Buffalo
%K planar graphs, graph drawing
%X A straight line grid embedding of a plane graph G is a drawing of G
such that the vertices are drawn at grid points and the edges are drawn
as non-intersecting straight line segments.  In this paper, we show that,
if an internally triangulated plane graph G has no non-empty triangles
(a non-empty triangle is a triangle of G containing some vertices in its
interior), then G can be embedded on a grid of size W x H such that
W+H <= n, W <= (n+3)/2 and H <= 2(n-1)/3, where n is the number of
vertices of G.  Such an embedding can be computed in linear time.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/90-23.txt.Z
%A Xin He
%T An Improved Algorithm for the Planar 3-Cut Problem
%R 90-23
%D February, 1990
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/90-26.txt.Z
%A R. Sarnath
%A X. He
%T Efficient Parallel Algorithms for Selection and Searching on Sorted Matrices
%R 90-26
%D February, 1990
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%A C.-S. Chang
%A G.T. DeTitta
%A H.A. Hauptman
%A R. Miller
%A P. Thuman
%A C.M. Weeks
%T Using Parallel Computers to Solve the Phase Problem of X-Ray Crystallography
%R 92-15
%D 1992 (not available, final version appears in The International Journal of Supercomputer Applications, vol 7, no. 1, pp. 25-49)
%I Department of Computer Science, SUNY Buffalo
%K X-Ray Crystallography, Parallel Computing
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-01.ps.Z
%A Aidong Zhang
%A Marian Nodine
%A Bharat Bhargava
%T Ensuring Semi-Atomicity in Heterogeneous Distributed Database Systems
%R 95-01
%D February 04, 1995
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-03.ps.Z
%A Arun K. Jagota
%A Giri Narasimhan
%A Kenneth W. Regan
%T Information Capacity of Binary Weights Associative Memories
%R 95-03
%D January 24, 1995
%I Department of Computer Science, SUNY Buffalo
%X We study the amount of information stored in the fixed points of
random instances of two binary weights associative memory models:
the Willshaw Model (WM) and the Inverted Neural Network (INN).  For
these models, we show divergences between the information capacity (IC)
as defined by Abu-Mostafa and Jacques, and information calculated from
the standard notion of storage capacity by Palm and Grossman,
respectively.  We prove that the WM has asymptotically optimal IC for
nearly the full range of threshold values, the INN likewise for constant
threshold values, and both over all degrees of sparseness of the stored
vectors.  This is contrasted with the result by Palm, which required
stored random vectors to be logarithmically sparse to achieve good
storage capacity for the WM, and with that of Grossman, which showed
that the INN has poor storage capacity for random vectors.
We propose Q-state versions of the WM and the INN, and show that they
retain asymptotically optimal IC while guaranteeing stable storage.
By contrast, we show that the Q-state INN has poor storage capacity for
random vectors.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/90-13.ps.Z
%A William J. Rapaport
%T Computer Processes and Virtual Persons:  Comments on Cole's "Artificial Intelligence and Personal Identity"
%R 90-13
%D May 1990
%I Department of Computer Science, SUNY Buffalo
%K computer processes, virtual persons, John Searle, Chinese-Room argument, artificial intelligence, cognitive science
%Y I.2     ARTIFICIAL INTELLIGENCE
 I.2.0   General-Cognitive simulation
 I.2.0   General-Philosophical foundations
 I.2.1   Applications and Expert Systems (see also H.4,J)-Natural language interfaces
 I.2.4   Knowledge Representation Formalisms and Methods-Semantic networks
 I.2.7   Natural Language Processing
 I.2.7   Natural Language Processing-Language parsing and understanding
%X This is a draft of the written version of comments on a paper by David Cole,
presented orally at the American Philosophical Association Central Division
meeting in New Orleans, 27 April 1990.  Following the written comments are
2 appendices:  One contains a letter to Cole updating these comments.
The other is the handout from the oral presentation.
  In general, I am sympathetic to Cole's arguments; my comments seek to
clarify and extend the issues.  Specifically, I argue that, in Searle's
celebrated Chinese Room Argument, Searle-in-the-room does understand
Chinese, in spite of his claims to the contrary.  He does this in the
sense that he is executing a computer ``process'' that can be said to
understand Chinese.  (The argument that the process in fact does
understand Chinese is made elsewhere; here, I merely assume that
*if* anything understands Chinese, it is a ``process'' executed by
Searle-in-the-room.)  I also show, by drawing an analogy
between the way that I add numbers in my head and the way that a
calculator adds numbers, that Searle-in-the-room's claim that he does
not understand Chinese does not contradict the fact that, by executing the
Chinese-natural-language-understanding algorithm, he does understand Chinese.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%A Stuart C. Shapiro
%A William J. Rapaport
%T The SNePS Family
%R 90-21
%D September 1990
%I Department of Computer Science, SUNY Buffalo
%K SNePS, semantic networks
%Y I.2     ARTIFICIAL INTELLIGENCE
 I.2.0   General-Cognitive simulation
 I.2.0   General-Philosophical foundations
 I.2.1   Applications and Expert Systems (see also H.4,J)-Natural language interfaces
 I.2.3   Deduction and Theorem Proving-Deduction (e.g., natural, rule-based)
 I.2.3   Deduction and Theorem Proving-Nonmonotonic reasoning and belief revision
 I.2.4   Knowledge Representation Formalisms and Methods
 I.2.4   Knowledge Representation Formalisms and Methods-Representation languages
 I.2.4   Knowledge Representation Formalisms and Methods-Semantic networks
%X SNePS, the Semantic Network Processing System, has been designed to be
a system for representing the beliefs of a natural-language-using
intelligent system (a ``cognitive agent'').  It has always been the
intention that a SNePS-based
``knowledge base'' would ultimately be built, not by a programmer or
knowledge engineer entering representations of knowledge in some formal
language or data entry system, but by a human informing it using a
natural language (generally supposed to be English), or by the system
reading books or articles that had been
prepared for human readers.
  This document describes the history of SNePS and some recent SNePS
projects.
  It has been superseded by:
Shapiro, Stuart C., & Rapaport, William J. (1992), ``The SNePS Family,''
_Computers and Mathematics with Applications_, Vol. 23:  243-275.
Reprinted in Fritz Lehmann (ed.), _Semantic Networks in Artificial
Intelligence_ (Oxford:  Pergamon Press, 1992):  243-275.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-02.ps.Z
%A Kenneth W. Regan
%A D. Sivakumar
%A Jin-Yi Cai
%T Pseudorandom Generators, Measure Theory, and Natural Proofs
%R 95-02
%D January 25, 1995
%I Department of Computer Science, SUNY Buffalo
%X This paper proves that if strong pseudorandom number generators or
one-way functions exist, then the class of languages that have
polynomial-sized circuits is not small within exponential
time, in terms of the resource-bounded measure theory of Lutz.
More precisely, if for some e > 0 there exist nonuniformly
2^{n^e}-hard PSRGs, as is widely believed, then the complexity class
P/poly does not have measure zero in EXP.
Our results establish connections between the measure theory and
the "natural proofs" of Razborov and Rudich.  Our work is also
motivated by Lutz's hypothesis that NP does not have measure zero
in EXP; obtaining our results with NP in place of
P/poly would show much more far-reaching consequences from the
existence of PSRGs than are currently known.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-08.ps.Z
%A Kenneth W. Regan
%A D. Sivakumar
%T Improved Resource-Bounded Borel-Cantelli and Stochasticity Theorems
%R 95-08
%D February 16, 1995
%I Department of Computer Science, SUNY Buffalo
%K Resource-bounded measure, Borel-Cantelli Lemma, martingales, stochasticity
%X This note strengthens and simplifies Lutz's resource-bounded
version of the Borel-Cantelli lemma for density systems and
martingales.  
We observe that the technique can be used to construct martingales
that are ``additively honest,''
and also martingales that are ``multiplicatively honest.''
We use this to improve
the ``Weak Stochasticity Theorem'' of Lutz and Mayordomo:
their result does not address the issue of how rapidly the
bias away from 1/2 converges toward zero in a ``stochastic'' language,
while we show that the bias must vanish exponentially.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-07.ps.Z
%A Vinay Dabholkar
%A Sreejit Chakravarty
%A Farid Najm
%A Janak Patel
%T Cyclic Stress Tests for Full Scan Circuits
%R 95-07
%D February 24, 1995
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-10.ps.Z
%A Marek Zaionc
%T Lambda Representation of Operations Between Different Term Algebras
%R 95-10
%D February 28, 1995
%I Department of Computer Science, SUNY Buffalo
%K lambda definability
%Y F.4.1   Mathematical Logic (see also F.1.1,I.2.2-3)-Lambda calculus and related systems
 F.4.1   Mathematical Logic (see also F.1.1,I.2.2-3)-Recursive function theory
%X There is a natural isomorphism identifying second order types of the
simple typed $\lambda$ calculus with free homogeneous term algebras. Let
$\tau^A$ and $\tau^B$ be types representing algebras $A$ and $B$
respectively. Any closed term of the type $\tau^A \to \tau^B$ represents a
computable function between algebras $A$ and $B$. The problem investigated
in the paper is to find and characterize the set of all $\lambda$
definable functions between structures $A$ and $B$. The problem is
presented in a more general setting. If algebras $ A_1 ,...,A_n ,B$ are
represented respectively by second order types 
             $\tau^{A_1} ,...,\tau^{A_n},\tau^B$ 
then $\tau^{A_1} \to (...( \tau^{A_n} \to \tau^B )...)$ is a type
of functions from the product $A_1 \times ... \times A_n$ into algebra 
$B$. Any closed term of this type is a representation of an algorithm
which transforms a tuple of terms of types $\tau^{A_1} ,...,\tau^{A_n}$
respectively into a term of type $\tau^B$, which represents an object in
algebra $B$ (see [B\"{o}B85]). The problem investigated in the paper is
to find an effective computational characterization of the $\lambda$
definable functions between arbitrary free algebras and the expressiveness
of such transformations.   As an example we will consider $\lambda$
definability between well known free structures such as: numbers, words
and trees.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-11.dvi.Z
%A Sreejit Chakravarty
%A Kent Fuchs
%A Janak Patel
%T Evaluation and Generation of IDDQ Diagnostic Tests for Bridging Faults in Combinational Circuits
%R 95-11
%D February 27, 1995
%I Department of Computer Science, SUNY Buffalo
%X Diagnosis is the process of identifying the fault in a faulty chip.
We assume that IDDQ measurement is used during diagnosis.
A novel algorithm for evaluating the effectiveness of a set of input vectors
to diagnose bridging faults in combinational circuits is presented.  
We also present a simulation based test generator for computing
IDDQ tests for diagnosing bridging faults.
Experimental evaluation of the algorithms is presented.  The results obtained
are very encouraging.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-15.ps.Z
%A Karen Ehrlich
%A William J. Rapaport
%T A Computational Theory of Vocabulary Expansion:  Project Proposal
%R 95-15
%D March 21, 1995
%I Department of Computer Science, SUNY Buffalo
%K artificial intelligence, computational linguistics, natural-language understanding, lexical acquisition, semantics, semantic networks, machine learning
%Y I.2     ARTIFICIAL INTELLIGENCE
 I.2.0   General-Cognitive simulation
 I.2.1   Applications and Expert Systems (see also H.4,J)-Natural language interfaces
 I.2.3   Deduction and Theorem Proving-Nonmonotonic reasoning and belief revision
 I.2.4   Knowledge Representation Formalisms and Methods-Semantic networks
 I.2.6   Learning (see also K.3.2)-Concept learning
 I.2.7   Natural Language Processing-Language parsing and understanding
%X This project concerns the development and implementation of a
computational theory of how human readers and other
natural-language-understanding systems can automatically expand their
vocabulary by determining the meaning of a word from context.  The word
might be unknown to the reader, familiar but misunderstood, or familiar
but being used in a new sense.  `Context' includes the prior and
immediately surrounding text, grammatical information, and the reader's
background knowledge, but no access to a dictionary or other external
source of information (including a human).  The fundamental thesis is
that  the meaning of such a word (1) *can* be determined from context,
(2) can be *revised* and refined upon further encounters with the word,
(3) ``*converges*'' to a dictionary-like definition if enough context
has been provided and there have been enough exposures to the word, and
(4) eventually ``*settles down*'' to a ``steady state'', which,
however, is always subject to revision upon further encounters with the
word.  The system is being implemented in the SNePS-2.1
knowledge-representation and reasoning system, which provides a
software laboratory for testing and experimenting with the theory.
This research is a component of an interdisciplinary, cognitive-science
project to develop a computational cognitive model of a reader of
narrative text.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-14.ps.Z
%A B. Jayaraman
%A K. Moon
%T Implementation of Subset Logic Programs
%R 95-14
%D March 24, 1995
%I Department of Computer Science, SUNY Buffalo
%K subset and relational clauses, sets, set matching and unification, memo tables, monotonic aggregation, lazy evaluation, abstract machine, instruction set, performance analysis
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-16.ps.Z
%A Jin-Yi Cai
%A Alan L. Selman
%T Average Time Complexity Classes
%R 95-16
%D March 31, 1995
%I Department of Computer Science, SUNY Buffalo
%K computational complexity, average time complexity classes, hierarchy, Average-P, logarithmico-exponential
%Y F.1.3   Complexity Classes (see also F.2)-Complexity hierarchies
%X We extend Levin's theory of average polynomial time to arbitrary
time bounds and prove that average time complexity classes form
as fine a hierarchy as do deterministic time complexity classes.  
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-17.ps.Z
%A Pu Cai
%A Jin-yi Cai
%A Ashish Naik
%T Efficient algorithms for a scheduling problem and its applications to illicit drug market crackdowns
%R 95-17
%D April 03, 1995
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-11.ps.Z
%A Robin K. Hill
%T Issues of Semantics in a Semantic-Network Representation of Belief
%R 94-11
%D April 03, 1995
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-18.ps.Z
%A Yiming Gong
%A Sreejit Chakravarty
%T A Dynamic Diagnostic Test Generation System for IDDQ Measurement Based Diagnosis of Bridging Faults
%R 95-18
%D April 10, 1995
%I Department of Computer Science, SUNY Buffalo
%K VLSI, Testing, Diagnosis, Test Generation, IDDQ, Bridging Fault
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-20.ps.Z
%A Marek Zaionc
%T Fixpoint Technique for Counting Terms in Typed Lambda Calculus
%R 95-20
%D April 14, 1995
%I Department of Computer Science, SUNY Buffalo
%K typed lambda calculus, Curry-Howard isomorphism
%Y F.4.1   Mathematical Logic (see also F.1.1,I.2.2-3)-Lambda calculus and related systems
%X The typed lambda calculus with a denumerable set of ground types is
considered.  The aim of the paper is to give a procedure for counting
closed terms in long normal form.  It is shown that there is a surprising
correspondence between the number of closed terms and the fixpoint
solution of a system of polynomial equations in a complete lattice: the
counting of terms in the typed lambda calculus can be reduced to the
problem of finding the least fixpoint of a system of polynomial equations.
An algorithm for finding this least fixpoint is presented.  By the
well-known Curry-Howard isomorphism, the result can be interpreted as a
method for counting proofs in the implicational fragment of propositional
intuitionistic logic.  The problem of counting terms was studied, but
never published, by Ben-Yelles and J. Roger Hindley.  Relatedly, Hirokawa
proved that the question of whether a given type (formula) possesses an
infinite number of normal terms (proofs) is polynomial-space complete.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-22.ps.Z
%A Kannan Govindarajan
%A Bharat Jayaraman
%A Surya Mantha
%T Relaxation in Constraint Logic Languages
%R 95-22
%D April 25, 1995
%I Department of Computer Science, SUNY Buffalo
%K constraints, preferences, constraint hierarchies, logic programming, optimization, relaxation, model-theoretic and operational semantics, modal logic, soundness and completeness
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-21.ps.Z
%A Rajiv Chopra
%A Rohini Srihari
%A Anthony Ralston
%T HyperArc Consistency and Expensive Constraints
%R 95-21
%D May 05, 1995
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-24.ps.Z
%A Marek Zaionc
%T Lambda definability is decidable for second order types and for regular third order types
%R 95-24
%D May 08, 1995
%I Department of Computer Science, SUNY Buffalo
%K lambda definability
%Y F.4.1   Mathematical Logic (see also F.1.1,I.2.2-3)-Lambda calculus and related systems
%X Loader has proved that the Statman-Plotkin conjecture fails: it is
undecidable whether a given function in the full type hierarchy is
lambda definable.  Loader's proof encodes the word problem in the full
type hierarchy over a domain with 7 elements.  The aim of this paper is
to show that the lambda definability problem restricted to second-order
types and to regular third-order types is decidable over any finite
domain.  (Lambda definability is obviously decidable for types of order
0 and 1.)  As an additional consequence of this result, we observe that
for certain types there is no finite grammar generating all closed terms.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/92-16.dvi.Z
%A Sreejit  Chakravarty
%A Paul J. Thadikaran
%T Generation and Simulation of IDDQ Tests for Bridging and Leakage Faults in Combinational Circuits
%R 92-16
%D 1992
%I Department of Computer Science, SUNY Buffalo
%X In the absence of layout information, and for better defect coverage,
test generation and fault simulation systems must target all bridging
faults.  The usefulness of targeting a subset of such faults, viz. all
two-line bridging faults, rests on the fact that an IDDQ test set that
detects all two-line bridging faults also detects all multiple-line,
single-cluster bridging faults.
  A novel algorithm for simulating IDDQ tests for all two-line bridging
faults in combinational circuits is presented.  The algorithm uses an
implicit representation of the fault list, resulting in a time- and
space-efficient algorithm.
  We show that the problem of computing IDDQ tests for a two-line bridging
fault, as well as for a leakage fault, is intractable even in some
restricted classes of circuits.  Next, experimental results on using
randomly generated IDDQ test sets for detecting bridging faults are
presented.  These results point to the computational feasibility of
targeting all two-line bridging faults in combinational circuits.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-25.ps.Z
%A David R. Koepsell
%A William J. Rapaport
%T The Ontology of Cyberspace:  Questions and Comments
%R 95-25
%D April 22, 1995
%I Department of Computer Science, SUNY Buffalo
%K ontology, cyberspace, philosophical issues, copyright, patent, legal issues
%Y D.2.9   Management-Copyrights
 I.2.0   General-Philosophical foundations
 K.5     LEGAL ASPECTS OF COMPUTING
 K.5.1   Software Protection-Copyrights
 K.5.1   Software Protection-Patents
 K.5.1   Software Protection-Trade secrets
 K.5.m   Miscellaneous-Hardware patents
%X This document consists of two papers: "The Ontology of Cyberspace:
Preliminary Questions", by David R. Koepsell, and "Comments on
`The Ontology of Cyberspace'," by William J. Rapaport.
They were originally presented at the Tri-State Philosophical
Association Meeting, St. Bonaventure University, 22 April 1995.
This document is SUNY Buffalo Department of Computer Science
Technical Report 95-25 and SUNY Buffalo Center for Cognitive
Science Technical Report 95-09.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/93-41.ps.Z
%A Jin-yi Cai
%A W.H.J. Fuchs
%A Dexter Kozen
%A Zicheng Liu
%T Efficient Average-Case Algorithms for the Modular Group
%R 93-41
%D June 15, 1995 (modified version)
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-27.ps.Z
%A Jin-yi Cai
%A Zicheng Liu
%T The bounded membership problem of the monoid SL_2(N)
%R 95-27
%D June 15, 1995
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-29.ps.Z
%A L. Babai
%A R. Beals
%A J-Y. Cai
%A G. Ivanyos
%A E. Luks
%T Multiplicative equations over commutative matrices
%R 95-29
%D July 13, 1995
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-30.ps.Z
%A Jin-Yi Cai
%A D. Sivakumar
%T The Resolution of a Hartmanis Conjecture
%R 95-30
%D July 13, 1995
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-31.ps.Z
%A Jin-Yi Cai
%A Ashish V. Naik
%A D. Sivakumar
%T On the Existence of Hard Sparse Sets under Weak Reductions
%R 95-31
%D July 13, 1995
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-23.ps.Z
%A Davin Milun
%T Generating Markov Random Field Image Analysis Systems from Examples
%R 95-23
%D May 1, 1995
%I Department of Computer Science, SUNY Buffalo
%K MRF ; pattern recognition ; edge detection ; low level computer vision 
%Y I.5.2   Design Methodology
 I.2.6   Learning (see also K.3.2)-Parameter learning
%X This thesis develops image interpretation systems from training sets of
images.  The system learns Markov random field parameters by sampling from
this training set.  Specialized MRFs can model both the image and the noise
processes.  The method has been tested in the domains of binary image
reconstruction, edge detector enhancement and edge detection.
  Capturing the noise process within the MRF parameters is achieved by
sampling neighborhoods in the original images together with the
corresponding neighborhoods in the noisy images, and so creating a
probability density function over pairs of neighborhoods across both
images.  This technique is termed the Double Neighborhood MRF (DNMRF).
It can capture the noise model even in situations where the noise is
correlated.
  The task that the DNMRF method performs, and the domain in which it
operates, can be changed by simply providing an appropriate set of training
images.  The domains in which this system will work are those problems
where the task to be performed is localized in nature, has a binary
desired result, and has an available set of training images.
  The required size of the training set is studied.  For standard MRFs,
sampling only 16 images of size 64x64 is required.  The performance
of DNMRFs continues to improve even as the sample size increases beyond
256 images.
  The method uses optimization to interpret images.  The optimizers used are
the ICM gradient descent method, and simulated annealing.  The sparseness
of sampled MRF distributions is ameliorated by a technique developed by
Hancock and Kittler.
  Of note are the domains of edge detection and edge detector enhancement of
JPEG compressed images.  In the case of edge detection the DNMRF method,
when operating on binary images, performed as well as, or even better than,
other well known edge detectors, such as Sobel and Canny.  In the case of
edge enhancement of JPEG compressed images, the method greatly improved the
output of the edge detector.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-35.ps.Z
%A Debashish Niyogi
%T A Knowledge-Based Approach to Deriving Logical Structure from Document Images
%R 94-35
%D August 1994
%I Department of Computer Science, SUNY Buffalo
%K Document Understanding ; Image Analysis ; Artificial Intelligence ; Pattern Recognition Layout Analysis ; Digital Libraries ; Logic Programming ; Knowledge-Based Systems 
%X An important application of artificial intelligence is document image
understanding, specifically the analysis of document images to derive a
symbolic description of the document structure and contents.  This
requires the segmentation of the different blocks of printed matter
using standard image processing techniques, and the use of spatial
domain knowledge to first classify these blocks (e.g., text paragraphs,
photographs, etc.), then group these blocks into logical units (e.g.,
newspaper stories, magazine articles, etc.), and finally determine the
reading order of the text blocks within each logical unit.  The above
steps describe the problem of converting the physical structure of the
document into its logical structure with the use of domain knowledge
about document layout.  The objective of this work is to develop a
computational model for the derivation of the logical structure of
documents using certain formalisms designed for this task.  In this
model, a simple yet powerful rule-based control strategy utilizes the
data obtained from the invocation of different types of operations on a
digitized document image, and makes inferences about the document using
a knowledge base of document layout rules.  The domain knowledge about
document structure is represented in a unified multi-level hierarchical
form, and is used by reasoning processes to make inferences.  The main
issues investigated in this research are: the kinds of domain and
control knowledge that are required, how to represent this knowledge in
a globally integrated form, and how to devise a control strategy that
efficiently utilizes this knowledge.  A knowledge-based document
logical structure derivation system (DeLoS) has been developed based on
this model. The system consists of a hierarchical rule-based control
system that guides the block classification, block grouping and
read-ordering operations; a global data structure that stores the
document image data and incremental inferences; and a domain knowledge
base that encodes the rules governing document layout.  Applications of
this approach include use in digital libraries for the retrieval of
relevant logical document information, as well as in comprehensive
document understanding systems that can read document text and allow
interactive querying of the syntactic and semantic information in a
document.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-12.ps.Z
%A Aidong Zhang
%A Biao Cheng
%A Raj Acharya
%T Texture-Based Image Retrieval Using Fractals
%R 95-12
%D July 21, 1995
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-35.ps.Z
%A Jin-Yi Cai
%T A simple improvement of a theorem of Polya
%R 95-35
%D August 10, 1995
%I Department of Computer Science, SUNY Buffalo
%K quadratic residues
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-36.ps.Z
%A Jin-Yi Cai
%T Frobenius's degree formula and Toda's polynomials
%R 95-36
%D August 10, 1995
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-34.ps.Z
%A Min-You Wu
%T On Runtime Parallel Scheduling
%R 95-34
%D August 10, 1995
%I Department of Computer Science, SUNY Buffalo
%X Parallel scheduling is a new approach to load balancing in which all
processors cooperate to schedule work.  By using global load information
at compile time or runtime, parallel scheduling can balance the load
accurately and thus provides high-quality load balancing.
This paper presents an overview of the parallel scheduling technique.
Particular scheduling algorithms for tree, hypercube, and mesh networks
are presented.  These algorithms fully balance the load and maximize
locality at runtime, and their communication costs are significantly
lower than those of other existing algorithms.
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-31.ps.Z
%A Aidong Zhang
%T Impact of Multimedia Data on Workflow
%R 94-31
%D August 16, 1995
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/94-43.ps.Z
%A Aidong Zhang
%A Marian Nodine
%A Bharat Bhargava
%T Ensuring Semi-Atomicity in Heterogeneous Distributed Database Systems
%R 94-43
%D August 16, 1995
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-05.ps.Z
%A Aidong Zhang
%A Biao Cheng
%A Raj Acharya
%T Using Fractal Coding to Index Image Content for a Digital Library
%R 95-05
%D August 16, 1995
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-19.ps.Z
%A Aidong Zhang
%A Biao Cheng
%A Raj Acharya
%T Texture-Based Image Retrieval Using Fractal Codes
%R 95-19
%D August 16, 1995
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-33.ps.Z
%A Aidong Zhang
%T On Synchronized Presentation Management in Multimedia Database Systems
%R 95-33
%D August 16, 1995
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-37.ps.Z
%A Kannan Govindarajan
%A Bharat Jayaraman
%A Surya Mantha
%T Optimization and Relaxation in Constraint Logic Languages
%R 95-37
%D August 21, 1995
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-39.ps.Z
%A K.W. Regan
%A H. Vollmer
%T Gap Languages and Log-Time Complexity Classes
%R 95-39
%D September 12, 1995
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-40.ps.Z
%A Jin-Yi Cai
%A D. Sivakumar
%T Resolution of Hartmanis' Conjecture for NL-hard sparse sets
%R 95-40
%D September 18, 1995
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-41.ps.Z
%A Jin-Yi Cai
%A Mitsunori Ogihara
%T Sparse Sets versus Complexity Classes
%R 95-41
%D September 18, 1995
%I Department of Computer Science, SUNY Buffalo
%Z Tue, 19 Sep 1995 15:03:53 GMT

%U http://www.cs.buffalo.edu/pub/tech-reports/95-43.ps.Z
%A S. Ravikumar
%A D. Sivakumar
%T On Self-Testing without the Generator Bottleneck
%R 95-43
%D September 20, 1995
%I Department of Computer Science, SUNY Buffalo
%X Suppose P is a program designed to compute a linear function f on a 
group G.  The task of "self-testing" f, that is, testing if P computes 
f correctly on most inputs, usually involves checking if P computes f 
correctly on all the generators of G.  Recently, F. Ergun presented 
self-testers that avoid this "generator bottleneck" for specific 
functions. In this paper, we generalize Ergun's results, and extend 
them to a much larger class of functions.  Our results give efficient 
self-testers for polynomial differentiation, integration, arithmetic 
in large finite field extensions, and constant-degree polynomials over 
large rings.
%Z Thu, 21 Sep 1995 17:10:00 GMT

%U http://www.cs.buffalo.edu/pub/tech-reports/95-42.ps.Z
%A Jin-Yi Cai
%A Ashish V. Naik
%A D. Sivakumar
%T Bounded Truth Table Reductions of P
%R 95-42
%D September 21, 1995
%I Department of Computer Science, SUNY Buffalo
%Z Fri, 22 Sep 1995 03:00:12 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-38.ps.Z
%A Giovanni Seni
%T Large Vocabulary Recognition of On-line Handwritten Cursive Words
%R 95-38
%D August 1, 1995
%I Department of Computer Science, SUNY Buffalo
%K Handwriting recognition, Pen computing, Neural network applications
%Z Mon, 16 Oct 1995 16:25:34 GMT

%U ftp://ftp.cs.buffalo.edu/pub/tech-reports/95-45.ps.Z
%A Petr Slavik
%T Improved Performance of the Greedy Algorithm for the Minimum Set Cover and Minimum Partial Cover Problems
%R 95-45
%D October 15, 1995
%I Department of Computer Science, SUNY Buffalo
%K Approximation Algorithms, Combinatorial Optimization, Greedy Algorithm.
%X We establish significantly improved bounds on the performance of the greedy  
algorithm for approximating MINIMUM SET COVER and MINIMUM PARTIAL COVER.  
Our improvements result from a new approach to both problems. In particular,  
   (a) we improve the known bound on the performance ratio of the greedy  
algorithm for general covers without cost, showing that it differs from  
the classical harmonic bound by a function which approaches infinity (as  
the size of the base set increases);  
   (b) we show that, for covers without cost, the performance guarantee  
for the greedy algorithm is significantly better than the performance  
guarantee for the randomized rounding algorithm;  
   (c) we prove that the classical bounds on the performance of the greedy  
algorithm for complete covers with costs are valid for partial covers as well,  
thus lowering, by more than a factor of two, the previously known estimate.
%Z Wed, 18 Oct 1995 13:48:31 GMT

