
Unified bounds for the independence number of graphs

Published online by Cambridge University Press:  11 December 2023

Jiang Zhou*
Affiliation:
College of Mathematical Sciences, Harbin Engineering University, Harbin, PR China

Abstract

The Hoffman ratio bound, the Lovász theta function, and the Schrijver theta function are classical upper bounds for the independence number of graphs, which are useful in graph theory, extremal combinatorics, and information theory. By using generalized inverses and eigenvalues of graph matrices, we give bounds for independent sets and the independence number of graphs. Our bounds unify the Lovász theta function, the Schrijver theta function, and Hoffman-type bounds, and we obtain necessary and sufficient conditions for graphs attaining these bounds. Our work leads to some simple structural and spectral conditions for determining a maximum independent set, the independence number, the Shannon capacity, and the Lovász theta function of a graph.

Type
Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of The Canadian Mathematical Society

1 Introduction

The independence number $\alpha(G)$ of a graph G is the maximum size of an independent set in G; it is an important parameter in graph theory. The following is a celebrated upper bound given by Hoffman.

Theorem 1.1 [5, Theorem 3.5.2]

Let G be a k-regular ( $k\neq 0$ ) graph with n vertices, and let $\tau $ be the minimum adjacency eigenvalue of G. Then

$$ \begin{align*} \alpha(G)\leq n\frac{|\tau|}{k-\tau}. \end{align*} $$

If an independent set C meets this bound, then every vertex not in C is adjacent to exactly $|\tau |$ vertices of C.
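As a quick numerical illustration (an illustrative sketch assuming Python with numpy and networkx; the Petersen graph is chosen only as a convenient test case), the Hoffman bound evaluates to $10\cdot 2/(3+2)=4$, which equals the independence number of the Petersen graph.

```python
import numpy as np
import networkx as nx  # assumed available; any source of adjacency matrices works

G = nx.petersen_graph()                   # 3-regular graph on 10 vertices
A = nx.to_numpy_array(G)                  # adjacency matrix
n, k = A.shape[0], int(A.sum(axis=1)[0])  # n = 10, k = 3
tau = np.linalg.eigvalsh(A)[0]            # minimum adjacency eigenvalue (= -2)
print(n * abs(tau) / (k - tau))           # Hoffman bound n|tau|/(k - tau) = 4.0
```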

The upper bound in Theorem 1.1 is often called the Hoffman (ratio) bound for $\alpha(G)$. Haemers generalized the Hoffman bound to graphs that are not necessarily regular.

Theorem 1.2 [17, p. 17]

Let G be a graph with n vertices and minimum degree $\delta$, and let $\lambda$ and $\tau$ be the maximum and the minimum adjacency eigenvalues of G, respectively. Then

$$ \begin{align*} \alpha(G)\leq n\frac{\lambda|\tau|}{\delta^2-\lambda\tau}. \end{align*} $$

There are many Hoffman-type ratio bounds, which are useful in graph theory, coding theory, and extremal combinatorics [8–10, 12, 13]. Some generalized Hoffman ratio bounds involving adjacency eigenvalues, Laplacian eigenvalues, and normalized Laplacian eigenvalues of graphs can be found in [14, 17, 18, 20].

For a graph $G=(V,E)$, let $G^k$ denote the graph whose vertex set is $V^k$, in which two distinct vertices $u_1\ldots u_k$ and $v_1\ldots v_k$ are adjacent if and only if for each $i\in\{1,\ldots,k\}$ either $u_i=v_i$ or $\{u_i,v_i\}\in E$ (see [1]). The Shannon capacity [5, 23] of G is defined as

$$ \begin{align*} \Theta(G)=\lim_{k\rightarrow\infty}\alpha(G^k)^{1/k}=\sup_{k\geq1}\alpha(G^k)^{1/k}, \end{align*} $$

and it represents the number of distinct messages per use that the channel can communicate with no error when used many times [1]. It is difficult to compute $\Theta(G)$ for a general graph G; even the Shannon capacity of the cycle $C_7$ is not known (see [4, 21]).

The Lovász theta function $\vartheta(G)$ is an upper bound for $\alpha(G)$ introduced in [19], and it is a powerful tool for studying the Shannon capacity of graphs. Lovász [19] proved that $\alpha(G)\leq\Theta(G)\leq\vartheta(G)$ and posed the problem of finding graphs with $\Theta(G)=\vartheta(G)$ (see Problem 1 in [19]). The Schrijver theta function $\vartheta'(G)$ is a smaller upper bound for $\alpha(G)$: Schrijver [22] proved that $\alpha(G)\leq\vartheta'(G)\leq\vartheta(G)$. By considering the relation between $\alpha(G)$ and the rank of matrices associated with G, Haemers [16] proved that $\Theta(G)$ is at most the minimum rank of a class of graph matrices.

Generalized inverses of graph matrices have important applications in random walks on graphs [6] and the sparsification of graphs [24]. This paper uses generalized inverse methods to study $\alpha(G)$, $\Theta(G)$, $\vartheta(G)$, and $\vartheta'(G)$, and gives bounds which unify the Lovász theta function, the Schrijver theta function, and Hoffman-type bounds. Based on the generalized inverse method, we obtain simple structural and spectral conditions that determine $\alpha(G)$, $\Theta(G)$, $\vartheta(G)$, and a maximum independent set in G. Our conditions can also be used to find graphs with $\Theta(G)=\vartheta(G)$, which gives a partial answer to the problem posed by Lovász.

The paper is organized as follows: In Section 2, we introduce some auxiliary lemmas. In Section 3, we give bounds for the sizes of independent sets and the independence number of graphs by using generalized inverses and eigenvalues of graph matrices. In Sections 4–6, we show that our bounds unify the Lovász theta function, the Schrijver theta function, and the Hoffman-type bounds in [5, 14, 18, 20], and we obtain necessary and sufficient conditions for graphs attaining the bounds. In Section 7, we give some simple structural and spectral conditions for determining a maximum independent set, the independence number, the Shannon capacity, and the Lovász theta function of a graph. Some concluding remarks are given in Section 8.

2 Preliminaries

For a real square matrix M, the group inverse of M, denoted by $M^\#$, is the matrix X such that $MXM=M$, $XMX=X$, and $MX=XM$. It is known [2, p. 156] that $M^\#$ exists if and only if $\mathrm{rank}(M)=\mathrm{rank}(M^2)$. If $M^\#$ exists, then $M^\#$ is unique.

For a positive semidefinite real matrix M, there exists an orthogonal matrix U such that

$$ \begin{align*} M=U\mathrm{diag}(\lambda_1,\ldots,\lambda_n)U^\top, \end{align*} $$

where $\mathrm{diag}(\lambda_1,\ldots,\lambda_n)$ denotes a diagonal matrix with nonnegative diagonal entries $\lambda_1,\ldots,\lambda_n$. We set $M^{\frac{1}{2}}=U\mathrm{diag}(\lambda_1^{\frac{1}{2}},\ldots,\lambda_n^{\frac{1}{2}})U^\top$ and $M^{-\frac{1}{2}}=U\mathrm{diag}((\lambda_1^+)^{\frac{1}{2}},\ldots,(\lambda_n^+)^{\frac{1}{2}})U^\top$, where $\lambda_i^+=\lambda_i^{-1}$ if $\lambda_i>0$, and $\lambda_i^+=0$ if $\lambda_i=0$. Then

$$ \begin{align*} M^\#=U\mathrm{diag}(\lambda_1^+,\ldots,\lambda_n^+)U^\top=(M^{-\frac{1}{2}})^2. \end{align*} $$
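The spectral construction above can be carried out numerically; the following illustrative sketch (assuming numpy, with the eigenvalue tolerance an arbitrary choice) computes $M^\#$ for a symmetric positive semidefinite matrix and checks that it agrees with the Moore–Penrose inverse, as it must in this symmetric case.

```python
import numpy as np

def group_inverse_psd(M, tol=1e-10):
    """Group inverse M^# of a symmetric PSD matrix via M = U diag(lam) U^T."""
    lam, U = np.linalg.eigh(M)
    lam_plus = np.where(lam > tol, 1.0 / np.maximum(lam, tol), 0.0)  # lam_i^+
    return U @ np.diag(lam_plus) @ U.T

rng = np.random.default_rng(0)
B = rng.random((5, 3))
M = B @ B.T                               # a rank-deficient PSD test matrix
assert np.allclose(group_inverse_psd(M), np.linalg.pinv(M))
```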

Lemma 2.1 [2, pp. 162–163]

Let M be a square matrix such that $M^\#$ exists, and let $\lambda \neq 0$ be an eigenvalue of M with an eigenvector x. Then

$$ \begin{align*} M^\#x=\lambda^{-1}x. \end{align*} $$

For a real matrix A, the Moore–Penrose inverse of A is the real matrix X such that $AXA=A$ , $XAX=X$ , $(AX)^\top =AX,$ and $(XA)^\top =XA$ . Let $A^+$ denote the Moore–Penrose inverse of A. It is well known that $A^+$ exists and is unique.

Let $R(M)=\{x:x=My,y\in \mathbb {R}^n\}$ denote the range of an $m\times n$ real matrix M.

Lemma 2.2 [2, pp. 43, 49]

Let A be a real matrix. Then

$$ \begin{align*} R(AA^\top)&=R(A)=R(AA^+),\\ (A^\top A)^\#A^\top&=A^+. \end{align*} $$

Lemma 2.3 Let A be an $n\times m$ real matrix, and let x be a unit real vector of dimension n. Then

$$ \begin{align*} x^\top AA^+x\leq1, \end{align*} $$

with equality if and only if $x\in R(A)$ .

Proof It is known [2] that $AA^+$ is a real symmetric idempotent matrix. So each eigenvalue of $AA^+$ is $0$ or $1$. For a unit real vector x, we have $x^\top AA^+x\leq 1$, with equality if and only if $x\in R(AA^+)$. By Lemma 2.2, $x\in R(AA^+)$ is equivalent to $x\in R(A)$.

We now introduce the Lovász theta function defined in [19]. An orthonormal representation of an n-vertex graph G is a set $\{u_1,\ldots,u_n\}$ of unit real vectors such that $u_i^\top u_j=0$ if i and j are two nonadjacent vertices in G. The value of an orthonormal representation $\{u_1,\ldots,u_n\}$ is defined to be

$$ \begin{align*} \min_c\max_{1\leq i\leq n}\frac{1}{(c^\top u_i)^2}, \end{align*} $$

where c ranges over all unit real vectors. The Lovász theta function $\vartheta (G)$ is the minimum value of all orthonormal representations of G.

Let $(M)_{ij}$ denote the $(i,j)$ -entry of a matrix M, and let $\lambda _1(A)$ denote the largest eigenvalue of a real symmetric matrix A. The independence number $\alpha (G)$ , the Shannon capacity $\Theta (G),$ and the Lovász theta function $\vartheta (G)$ have the following relations.

Lemma 2.4 [19]

For any graph G, we have

$$ \begin{align*} \alpha(G)\leq\Theta(G)\leq\vartheta(G)=\min_{A}\lambda_1(A), \end{align*} $$

where A ranges over all real symmetric matrices indexed by vertices of G such that $(A)_{ij}=1$ when $i=j$ or $i,j$ are two nonadjacent vertices.

For a graph G, its Schrijver theta function $\vartheta'(G)$ is defined as [22]

$$ \begin{align*} \vartheta'(G)=\min_{A}\lambda_1(A), \end{align*} $$

where A ranges over all real symmetric matrices indexed by vertices of G such that $(A)_{ij}\geq 1$ when $i=j$ or $i,j$ are two nonadjacent vertices.

Lemma 2.5 [22]

For any graph G, we have

$$ \begin{align*} \alpha(G)\leq\vartheta'(G). \end{align*} $$

For an n-vertex graph G, the adjacency matrix A of G is an $n\times n$ symmetric matrix with entries

$$ \begin{align*} (A)_{ij}=\begin{cases}1, & \text{if}\ \{i,j\}\in E(G),\\ 0, & \text{if}\ \{i,j\}\notin E(G),\end{cases} \end{align*} $$

where $E(G)$ denotes the edge set of G. Eigenvalues of the adjacency matrix of G are called adjacency eigenvalues of G. The following upper bound on the largest adjacency eigenvalue will be used later in the proof of Example 4.3.

Lemma 2.6 [3]

Let G be a graph with adjacency matrix A. Then

$$ \begin{align*} \lambda_1(A)\leq\max_{\{u,v\}\in E(G)}\sqrt{d_ud_v}, \end{align*} $$

where $d_u$ denotes the degree of a vertex u.

3 Bounds for independent sets and independence number

Let $A[S]$ denote the principal submatrix of a square matrix A determined by the rows and columns whose index set is S. A real vector $x=(x_1,\ldots ,x_n)^\top $ is called total nonzero if $x_i\neq 0$ for $i=1,\ldots ,n$ .

For an n-vertex graph G, let $V(G)$ denote the vertex set of G, and let $\mathcal {P}(G)$ denote the set of real matrix-vector pairs $(M,x)$ such that:

(a) M is a positive semidefinite $n\times n$ matrix indexed by vertices of G.

(b) $x=(x_1,\ldots ,x_n)^\top \in R(M)$ is total nonzero and $(M)_{ij}x_ix_j\leq 0$ if $i,j$ are two nonadjacent distinct vertices.

For $(M,x)\in \mathcal {P}(G)$ and vertex subset $T\subseteq V(G)$ , let $F(M,x)$ and $F_T(M,x)$ denote the following functions:

$$ \begin{align*} F(M,x)&=x^\top M^\#x\max_{u\in V(G)}\frac{(M)_{uu}}{x_u^2},\\ F_T(M,x)&=\frac{x^\top M^\#x}{|T|}\sum_{u\in T}\frac{(M)_{uu}}{x_u^2}. \end{align*} $$
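For concreteness, $F(M,x)$ and $F_T(M,x)$ can be evaluated as follows (an illustrative sketch assuming numpy; since M is symmetric positive semidefinite, $M^\#$ coincides with the Moore–Penrose inverse, so `np.linalg.pinv` is used).

```python
import numpy as np

def F(M, x):
    """F(M, x) = (x^T M^# x) * max_u (M)_{uu} / x_u^2 for symmetric PSD M."""
    quad = x @ np.linalg.pinv(M) @ x          # x^T M^# x
    return quad * np.max(np.diag(M) / x**2)

def F_T(M, x, T):
    """F_T(M, x) = (x^T M^# x / |T|) * sum_{u in T} (M)_{uu} / x_u^2."""
    quad = x @ np.linalg.pinv(M) @ x
    return quad / len(T) * sum(M[u, u] / x[u] ** 2 for u in T)
```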

In [14], Godsil and Newman gave bounds on the size of an independent set by using positive semidefinite graph matrices. Motivated by the ideas of Godsil and Newman, we give the following bounds in terms of generalized inverses of positive semidefinite graph matrices.

Theorem 3.1 Let $(M,x)\in \mathcal {P}(G)$ be a matrix-vector pair associated with an n-vertex graph G. Then the following statements hold:

(1) For any independent set S of G, we have

$$ \begin{align*} |S|\leq F_S(M,x), \end{align*} $$

with equality if and only if $M[S]$ is diagonal and there exists a constant c such that

$$ \begin{align*} \frac{(M)_{vv}}{x_v^2}&=c~(v\in S),\\ cx_v&=\sum_{u\in S}(M)_{vu}x_u^{-1}~(v\notin S). \end{align*} $$

(2) The independence number $\alpha (G)$ satisfies

$$ \begin{align*} \alpha(G)\leq F(M,x), \end{align*} $$

with equality if and only if G has an independent set C such that $M[C]$ is diagonal and

$$ \begin{align*} \frac{(M)_{vv}}{x_v^2}&=\max_{u\in V(G)}\frac{(M)_{uu}}{x_u^2}=c~(v\in C),\\ cx_v&=\sum_{u\in C}(M)_{vu}x_u^{-1}~(v\notin C). \end{align*} $$

Moreover, if an independent set C satisfies the above conditions, then

$$ \begin{align*} \alpha(G)=|C|=F(M,x). \end{align*} $$

(3) Suppose that $\frac {(M)_{11}}{x_1^2}\geq \cdots \geq \frac {(M)_{nn}}{x_n^2}$ are arranged in a decreasing order. If $\alpha (G)\geq t$ for some positive integer t, then

$$ \begin{align*} \alpha(G)\leq F_T(M,x), \end{align*} $$

where $T=\{1,\ldots ,t\}$ .

Proof Since $M^{\frac {1}{2}}M^{-\frac {1}{2}}M=M$ and $x\in R(M)$ , we have

$$ \begin{align*} M^{\frac{1}{2}}M^{-\frac{1}{2}}x=x. \end{align*} $$

For any independent set S of G, let $y=(y_1,\ldots ,y_n)^\top $ be the vector such that

$$ \begin{align*} y_u=\begin{cases}x_u^{-1}, & \text{if}\ u\in S,\\ 0, & \text{if}\ u\notin S.\end{cases} \end{align*} $$

By the Cauchy–Schwarz inequality, we have

$$ \begin{align*} |S|^2=(y^\top x)^2=(y^\top M^{\frac{1}{2}}M^{-\frac{1}{2}}x)^2\leq(y^\top My)(x^\top M^\#x). \end{align*} $$

Since $(M)_{ij}x_ix_j\leq 0$ when $i,j$ are two nonadjacent vertices, we get

$$ \begin{align*} |S|^2&\leq (y^\top My)(x^\top M^\#x)\leq x^\top M^\#x\sum_{u\in S}\frac{(M)_{uu}}{x_u^2},\\ |S|&\leq F_S(M,x), \end{align*} $$

with equality if and only if $M[S]$ is diagonal and there exists a constant c such that $M^{\frac {1}{2}}y=cM^{-\frac {1}{2}}x$ , that is, $My=cx$ (because $M^{\frac {1}{2}}M^{-\frac {1}{2}}x=x$ ). When $M[S]$ is diagonal, $My=cx$ is equivalent to

$$ \begin{align*} \frac{(M)_{vv}}{x_v^2}&= c~(v\in S),\\ cx_v&= \sum_{u\in S}(M)_{vu}x_u^{-1}~(v\notin S). \end{align*} $$

So part (1) holds.

Notice that $|S|\leq F_S(M,x)\leq F(M,x)$ for any independent set S of G. Hence

$$ \begin{align*} \alpha(G)\leq F(M,x). \end{align*} $$

If the equality holds, then G has an independent set C such that $M[C]$ is diagonal and

$$ \begin{align*} \frac{(M)_{vv}}{x_v^2}&= \max_{u\in V(G)}\frac{(M)_{uu}}{x_u^2}=c~(v\in C),\\ cx_v&= \sum_{u\in C}(M)_{vu}x_u^{-1}~(v\notin C). \end{align*} $$

If an independent set C satisfies the above conditions, then

$$ \begin{align*} |C|^2&= (z^\top x)^2=(z^\top M^{\frac{1}{2}}M^{-\frac{1}{2}}x)^2=(z^\top Mz)(x^\top M^\#x)=|C|F(M,x),\\ |C|&= F(M,x), \end{align*} $$

where $z=(z_1,\ldots ,z_n)^\top $ is the vector such that

$$ \begin{align*} z_u=\begin{cases}x_u^{-1}, & \text{if}\ u\in C,\\ 0, & \text{if}\ u\notin C.\end{cases} \end{align*} $$

Since $\alpha (G)\leq F(M,x)$ , we have

$$ \begin{align*} |C|=\alpha(G)=F(M,x). \end{align*} $$

So part (2) holds.

Suppose that $\frac {(M)_{11}}{x_1^2}\geq \cdots \geq \frac {(M)_{nn}}{x_n^2}$ are arranged in a decreasing order. Let C be an independent set such that $|C|=\alpha (G)$ . If $\alpha (G)\geq t$ , then

$$ \begin{align*} \alpha(G)\leq F_C(M,x)\leq F_T(M,x)~(T=\{1,\ldots,t\}). \end{align*} $$

So part (3) holds.

Remark 3.2 For a graph G, let M be a positive definite matrix indexed by vertices of G such that $(M)_{ij}=0$ if $i,j$ are two nonadjacent distinct vertices. Then $M^\#=M^{-1}$ and $(M,x)\in\mathcal{P}(G)$ for any total nonzero real vector x. By part (2) of Theorem 3.1, we have

$$ \begin{align*} \alpha(G)\leq F(M,x)=x^\top M^{-1}x\max_{u\in V(G)}\frac{(M)_{uu}}{x_u^2} \end{align*} $$

for any total nonzero vector x.

The $\{1\}$-inverse of M is a matrix X such that $MXM=M$. If M is singular, then it has infinitely many $\{1\}$-inverses (see [2, p. 41]).

Remark 3.3 Let M be a real symmetric matrix. Since $MM^\#M=MM^{(1)}M=M$ for any $\{1\}$-inverse $M^{(1)}$ of M, writing $x=My$ for $x\in R(M)$ gives $x^\top M^\#x=y^\top My=x^\top M^{(1)}x$. Hence, the upper bounds given in Theorem 3.1 can be computed from any $\{1\}$-inverse of M.

The signless Laplacian matrix of a graph G is defined as $D+A$, where D is the diagonal matrix of vertex degrees of G and A is the adjacency matrix of G. The subdivision graph of G, denoted by $S(G)$, is the graph obtained from G by replacing each edge of G by a path of length $2$.

We can find concrete examples such that an upper bound derived from Theorem 3.1 is smaller than the Hoffman bound given in Theorem 1.2.

Example 3.4 Let G be a nonregular graph with n vertices, m edges, and minimum degree $\delta \geq 2$ . Then there exists $(M,x)\in \mathcal {P}(S(G))$ such that

$$ \begin{align*} \alpha(S(G))=x^\top M^\#x\max_{u\in V(S(G))}\frac{(M)_{uu}}{x_u^2}<\frac{(m+n)\lambda^2}{4+\lambda^2}, \end{align*} $$

where $\lambda $ is the maximum adjacency eigenvalue of $S(G)$ .

Proof The signless Laplacian matrix of $S(G)$ can be written as $M=\begin{pmatrix}2I&B^\top\\B&D\end{pmatrix}$, where B is the vertex-edge incidence matrix of G and D is the diagonal matrix of vertex degrees of G. Let $x=\begin{pmatrix}2I&B^\top\\B&D\end{pmatrix}\begin{pmatrix}e\\0\end{pmatrix}=\begin{pmatrix}2e\\y\end{pmatrix}$, where e is the all-ones vector and y is the vector of vertex degrees of G. Let C be the independent set in $S(G)$ corresponding to the edge set of G. Then $(M,x)\in\mathcal{P}(S(G))$, and since $\delta\geq2$ ensures that the maximum of $(M)_{uu}/x_u^2$ is attained on C, the pair $(M,x)$ and C satisfy the conditions given in part (2) of Theorem 3.1. By part (2) of Theorem 3.1, we get

$$ \begin{align*} \alpha(S(G))=x^\top M^\#x\max_{u\in V(S(G))}\frac{(M)_{uu}}{x_u^2}=|C|=m. \end{align*} $$

Let $\lambda$ be the maximum adjacency eigenvalue of $S(G)$; then $-\lambda$ is the minimum adjacency eigenvalue of $S(G)$ (because $S(G)$ is bipartite). By Theorem 1.2, we have

$$ \begin{align*} \alpha(S(G))=x^\top M^\#x\max_{u\in V(S(G))}\frac{(M)_{uu}}{x_u^2}=m\leq\frac{(m+n)\lambda^2}{4+\lambda^2}. \end{align*} $$

We next show that the inequality is strict. It is known that $\lambda^2$ is the maximum eigenvalue of the signless Laplacian matrix of G, and $\lambda^2\geq\frac{4m}{n}$, with equality if and only if G is regular (see [7, Theorems 2.4.4 and 7.8.6]). Since G is nonregular, we have $\lambda^2>\frac{4m}{n}$, that is, $m<\frac{(m+n)\lambda^2}{4+\lambda^2}$.
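The computation in Example 3.4 can also be checked numerically. The sketch below (assuming numpy, and using $K_4$ minus an edge as an arbitrary nonregular test graph with minimum degree $2$) prints $F(M,x)=m=5$ together with the strictly larger Hoffman-type bound of Theorem 1.2 for $S(G)$.

```python
import numpy as np

# K_4 minus one edge: n = 4 vertices, m = 5 edges, nonregular, minimum degree 2.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)]
n, m = 4, len(edges)

B = np.zeros((n, m))                      # vertex-edge incidence matrix of G
for j, (u, v) in enumerate(edges):
    B[u, j] = B[v, j] = 1
D = np.diag(B.sum(axis=1))                # degree matrix of G

# Signless Laplacian of S(G), blocks ordered (subdivision vertices, original vertices).
M = np.block([[2 * np.eye(m), B.T], [B, D]])
x = M @ np.concatenate([np.ones(m), np.zeros(n)])    # x = (2e, degree vector)
F = (x @ np.linalg.pinv(M) @ x) * np.max(np.diag(M) / x**2)

A_S = np.block([[np.zeros((m, m)), B.T], [B, np.zeros((n, n))]])  # adjacency of S(G)
lam = np.max(np.linalg.eigvalsh(A_S))
print(F, (m + n) * lam**2 / (4 + lam**2))  # 5.0 and approximately 5.1
```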

For a connected graph G, if M is a nonnegative matrix indexed by $V(G)$ such that, for distinct vertices i and j, $(M)_{ij}>0$ if and only if $i,j$ are adjacent, then M is irreducible. By the Perron–Frobenius Theorem, the spectral radius $\rho$ of M is a positive eigenvalue of M, and there exists a unique positive unit eigenvector associated with $\rho$. The following eigenvalue bound follows from Theorem 3.1.

Corollary 3.5 Let M be a positive semidefinite nonnegative matrix associated with a connected graph G such that, for distinct vertices i and j, $(M)_{ij}>0$ if and only if $i,j$ are adjacent. Let $\rho$ be the spectral radius of M, and let x be the corresponding positive unit eigenvector. Then

$$ \begin{align*} \alpha(G)\leq\rho^{-1}\max_{u\in V(G)}\frac{(M)_{uu}}{x_u^2}. \end{align*} $$

Proof Since $Mx=\rho x$ , we have $(M,x)\in \mathcal {P}(G)$ . By Lemma 2.1, we get

$$ \begin{align*} M^\#x=\rho^{-1}x. \end{align*} $$

By part (2) of Theorem 3.1, we get

$$ \begin{align*} \alpha(G)\leq F(M,x)=x^\top M^\#x\max_{u\in V(G)}\frac{(M)_{uu}}{x_u^2}=\rho^{-1}\max_{u\in V(G)}\frac{(M)_{uu}}{x_u^2}. \end{align*} $$

The Laplacian matrix of a graph G is defined as $L=D-A$, where D is the diagonal matrix of vertex degrees of G and A is the adjacency matrix of G. It is well known that L is positive semidefinite. The following result follows from Theorem 3.1.

Corollary 3.6 Let G be a connected graph with Laplacian matrix L. For any total nonzero vector x satisfying $\sum _{u\in V(G)}x_u=0$ , we have

$$ \begin{align*} \alpha(G)\leq F(L,x). \end{align*} $$

Proof Since G is connected, the null space of L has dimension one. Since $Le=0$ for the all-ones vector e, we have $x\in R(L)$ for any vector x satisfying $e^\top x=\sum _{u\in V(G)} x_u=0$ . By part (2) of Theorem 3.1, we get

$$ \begin{align*} \alpha(G)\leq F(L,x). \end{align*} $$

4 Lovász theta function and Shannon capacity

For a graph G, let $\mathcal {M}(G)$ denote the set of real matrix-vector pairs $(M,x)$ such that:

(a) M is a positive semidefinite matrix indexed by vertices of G such that $(M)_{ij}=0$ if $i,j$ are two nonadjacent distinct vertices.

(b) $x\in R(M)$ is a total nonzero vector.

Since $\mathcal {M}(G)$ is a subset of $\mathcal {P}(G)$ , the notations $F(M,x)$ and $F_T(M,x)$ defined in Section 3 can be used for $(M,x)\in \mathcal {M}(G)$ directly.

In the following theorem, we show that the Lovász theta function $\vartheta(G)$ equals the minimum of the upper bound $F(M,x)$ from Theorem 3.1 over all $(M,x)\in\mathcal{M}(G)$, and we obtain necessary and sufficient conditions for graphs satisfying $\alpha(G)=\vartheta(G)$.

Theorem 4.1 For any graph G, we have

$$ \begin{align*} \alpha(G)\leq\min_{(M,x)\in\mathcal{M}(G)}F(M,x)=\vartheta(G), \end{align*} $$

with equality if and only if there exist $(M,x)\in \mathcal {M}(G)$ and an independent set C satisfying the conditions given in part (2) of Theorem 3.1.

Proof By Theorem 3.1, we have

$$ \begin{align*} \alpha(G)\leq\min_{(M,x)\in\mathcal{M}(G)}F(M,x), \end{align*} $$

with equality if and only if there exist $(M,x)\in\mathcal{M}(G)$ and an independent set C satisfying the conditions in part (2) of Theorem 3.1. We need to prove that the minimum value equals $\vartheta(G)$.

We first prove that the minimum value does not exceed $\vartheta (G)$ . Let $c,u_1,\ldots ,u_n$ be unit real vectors such that $\{u_1,\ldots ,u_n\}$ is an orthonormal representation of G and

$$ \begin{align*} \vartheta(G)=\max_{1\leq i\leq n}\frac{1}{(c^\top u_i)^2}. \end{align*} $$

Let U be the matrix whose ith column is $u_i$; then each diagonal entry of $U^\top U$ is $1$. Take $z=U^\top c$; then $(U^\top U,z)\in\mathcal{M}(G)$ and

$$ \begin{align*} z_i=c^\top u_i\neq0,~i=1,\ldots,n. \end{align*} $$

By Lemmas 2.2 and 2.3, we have

$$ \begin{align*} F(U^\top U,z)=z^\top(U^\top U)^\#z\max_{1\leq i\leq n}\frac{1}{z_i^2}=\vartheta(G)c^\top UU^+c\leq\vartheta(G). \end{align*} $$

So we have

$$ \begin{align*} \min_{(M,x)\in\mathcal{M}(G)}F(M,x)\leq F(U^\top U,z)\leq\vartheta(G). \end{align*} $$

We turn to prove that $\min _{(M,x)\in \mathcal {M}(G)}F(M,x)$ is at least $\vartheta (G)$ . There exists $(M_0,y)\in \mathcal {M}(G)$ such that

$$ \begin{align*} F(M_0,y)=y^\top(M_0)^\#y\max_{1\leq i\leq n}\frac{(M_0)_{ii}}{y_i^2}=\min_{(M,x)\in\mathcal{M}(G)}F(M,x). \end{align*} $$

Since $M_0$ is positive semidefinite, there exists a real matrix B such that $M_0=B^\top B$. Since $y\in R(M_0)=R(B^\top)$ and $F(M_0,y)$ is invariant under nonzero scaling of y, we may assume that $y=B^\top c$ for a unit real vector $c\in R(B)$. Let $b_i$ be the ith column of B. By Lemmas 2.2 and 2.3, we have

$$ \begin{align*} F(M_0,y)=y^\top(M_0)^\#y\max_{1\leq i\leq n}\frac{(M_0)_{ii}}{y_i^2}=c^\top BB^+c\max_{1\leq i\leq n}\frac{b_i^\top b_i}{(c^\top b_i)^2}=\max_{1\leq i\leq n}\frac{b_i^\top b_i}{(c^\top b_i)^2}. \end{align*} $$

We next prove that $F(M_0,y)\geq\vartheta(G)$ by using a method similar to the proof of [19, Theorem 3]. Let $A=(a_{ij})_{n\times n}$ be the matrix with entries

$$ \begin{align*} a_{ii}&= 1,~i=1,\ldots,n,\\ a_{ij}&= 1-\frac{b_i^\top b_j}{(c^\top b_i)(c^\top b_j)}=1-\frac{(M_0)_{ij}}{y_iy_j},~i\neq j. \end{align*} $$

By Lemma 2.4, we get

$$ \begin{align*} \lambda_1(A)\geq\vartheta(G). \end{align*} $$

We set

$$ \begin{align*} \beta=F(M_0,y)=\max_{1\leq i\leq n}\frac{b_i^\top b_i}{(c^\top b_i)^2}, \end{align*} $$

then

$$ \begin{align*} -a_{ij}&= \left(c-\frac{b_i}{(c^\top b_i)}\right)^\top\left(c-\frac{b_j}{(c^\top b_j)}\right),~i\neq j,\\ \beta-a_{ii}&= \left(c-\frac{b_i}{(c^\top b_i)}\right)^\top\left(c-\frac{b_i}{(c^\top b_i)}\right)+\beta-\frac{b_i^\top b_i}{(c^\top b_i)^2}. \end{align*} $$

Hence, $\beta I-A$ (I is the identity matrix) is positive semidefinite, that is,

$$ \begin{align*} \min_{(M,x)\in\mathcal{M}(G)}F(M,x)=\beta\geq\lambda_1(A)\geq\vartheta(G). \end{align*} $$

The following conclusion comes from Theorems 3.1 and 4.1 and Lemma 2.4.

Theorem 4.2 Let $(M,x)\in\mathcal{M}(G)$ be a matrix-vector pair for a graph G. If G has an independent set C such that $(M,x)$ and C satisfy the conditions given in part (2) of Theorem 3.1, then

$$ \begin{align*} \alpha(G)=\Theta(G)=\vartheta(G)=|C|=F(M,x). \end{align*} $$

A bipartite graph G is called semiregular with parameters $(n_1,n_2,r_1,r_2)$ if $V(G)$ has a bipartition $V(G)=V_1\cup V_2$ such that $|V_1|=n_1$, $|V_2|=n_2$, and vertices in the same color class have the same degree (the $n_i$ vertices of $V_i$ have degree $r_i$ for $i=1,2$).

Let I denote the identity matrix. The following are some examples for Theorem 4.2.

Example 4.3 Let G be a graph with a spanning subgraph H, and let H be a semiregular bipartite graph with parameters $(n_1,n_2,r_1,r_2)$ ( $n_1r_1=n_2r_2>0,n_1\leq n_2$ ). If the independent set with $n_2$ vertices in H is also an independent set in G, then

$$ \begin{align*} \alpha(G)=\Theta(G)=\vartheta(G)=n_2. \end{align*} $$

Proof The adjacency matrix of H can be written as $A=\begin {pmatrix}0&B^\top \\B&0\end {pmatrix}$ , and each row (column) sum of $B\in \mathbb {R}^{n_1\times n_2}$ is $r_1$ ( $r_2$ ). By Lemma 2.6, we have $\lambda _1(A)\leq \sqrt {r_1r_2}$ . Then

$$ \begin{align*} \lambda_1(BB^\top)=\lambda_1(A^2)\leq r_1r_2. \end{align*} $$

Let $M=\begin{pmatrix}r_1I&B^\top\\B&r_2I\end{pmatrix}$; then the all-ones vector $e\in R(M)$ because $Me=(r_1+r_2)e$. Since the Schur complement

$$ \begin{align*} r_2I-r_1^{-1}BB^\top=r_1^{-1}(r_1r_2I-BB^\top) \end{align*} $$

is positive semidefinite, M is positive semidefinite. Hence $(M,e)\in\mathcal{M}(G)$. Let C be the independent set with $n_2$ vertices in H, which is also an independent set in G. Since $n_1r_1=n_2r_2$ and $n_1\leq n_2$ give $r_1\geq r_2$, the pair $(M,e)$ and C satisfy the conditions given in part (2) of Theorem 3.1, so by Theorem 4.2, we have

$$ \begin{align*} \alpha(G)=\Theta(G)=\vartheta(G)=|C|=\alpha(H)=n_2. \end{align*} $$

For two disjoint graphs $G_1$ and $G_2$ , let $G_1\vee G_2$ denote the graph obtained from the union of $G_1$ and $G_2$ by joining each vertex of $G_1$ to each vertex of $G_2$ . Let $\overline {G}$ denote the complement of a graph G.

Example 4.4 Let G be an n-vertex graph with minimum degree $\delta $ . For any integer $s\geq n-\delta $ , we have

$$ \begin{align*} \alpha(G\vee\overline{K_s})=\Theta(G\vee\overline{K_s})=\vartheta(G\vee\overline{K_s})=s, \end{align*} $$

where $K_s$ is the complete graph with s vertices.

Proof The adjacency matrix of $G\vee \overline {K_s}$ can be written as $\begin {pmatrix}0&J_{s\times n}\\J_{s\times n}^\top &A\end {pmatrix}$ , where A is the adjacency matrix of G, $J_{s\times n}$ is the $s\times n$ all-ones matrix. Let L be the Laplacian matrix of G, and let $M=\begin {pmatrix}sI&J_{s\times n}\\J_{s\times n}^\top &nI-L\end {pmatrix}$ . Then the all-ones vector $e\in R(M)$ because $Me=(s+n)e$ . Since the Schur complement

$$ \begin{align*} nI-L-s^{-1}J_{s\times n}^\top J_{s\times n} \end{align*} $$

equals the Laplacian matrix of $\overline{G}$ (which is positive semidefinite), M is positive semidefinite. Hence $(M,e)\in\mathcal{M}(G\vee\overline{K_s})$. Let C be the independent set of size s in $G\vee\overline{K_s}$ formed by the vertices of $\overline{K_s}$. Since $s\geq n-\delta$ gives $(M)_{uu}=n-d_u\leq s$ for every vertex u of G, the pair $(M,e)$ and C satisfy the conditions given in part (2) of Theorem 3.1, so by Theorem 4.2, we have

$$ \begin{align*} \alpha(G\vee\overline{K_s})=\Theta(G\vee\overline{K_s})=\vartheta(G\vee\overline{K_s})=|C|=s. \end{align*} $$

Let $e=(1,\ldots ,1)^\top $ denote the all-ones vector.

Corollary 4.5 Let G be a graph with adjacency matrix A such that $A+\lambda I$ is positive definite for some $\lambda>0$. Then

$$ \begin{align*} \alpha(G)\leq\lambda e^\top(A+\lambda I)^{-1}e, \end{align*} $$

with equality if and only if there exists an independent set C such that every vertex not in C is adjacent to exactly $\lambda $ vertices of C. Moreover, if an independent set C satisfies the above condition, then

$$ \begin{align*} \alpha(G)=\Theta(G)=\vartheta(G)=|C|=\lambda e^\top(A+\lambda I)^{-1}e. \end{align*} $$

Proof Since $A+\lambda I$ is positive definite, we have $(A+\lambda I,e)\in \mathcal {M}(G)$ and $F(A+\lambda I,e)=\lambda e^\top (A+\lambda I)^{-1}e$ . The conclusions follow from Theorems 3.1 and 4.2.

The following is an example for Corollary 4.5.

Example 4.6 For the path $P_{2k+1}$ with $2k+1$ vertices, let C be the independent set in $P_{2k+1}$ with $k+1$ vertices. Then every vertex not in C is adjacent to exactly two vertices of C. By Corollary 4.5, we have

$$ \begin{align*} \alpha(P_{2k+1})=\Theta(P_{2k+1})=\vartheta(P_{2k+1})=|C|=2e^\top(A+2I)^{-1}e=k+1, \end{align*} $$

where A is the adjacency matrix of $P_{2k+1}$ .
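A small numerical check of Example 4.6 (an illustrative sketch assuming numpy; the range of k is arbitrary) confirms that $2e^\top(A+2I)^{-1}e=k+1$ for small odd paths.

```python
import numpy as np

def path_adjacency(n):
    """Adjacency matrix of the path P_n on vertices 0, ..., n-1."""
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1
    return A

for k in range(1, 6):
    A = path_adjacency(2 * k + 1)
    e = np.ones(2 * k + 1)
    bound = 2 * e @ np.linalg.solve(A + 2 * np.eye(2 * k + 1), e)  # lambda = 2
    print(k, round(bound, 6))              # prints k and k + 1
```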

A clique covering of G is a set $\varepsilon =\{Q_1,\ldots ,Q_r\}$ of cliques in G that cover all edges of G, that is, each edge of G belongs to at least one $Q_i$ . For a clique covering $\varepsilon $ and a vertex u of G, the $\varepsilon $ -degree of u is the number of cliques in $\varepsilon $ containing u, denoted by $d_u^\varepsilon $ . The clique covering matrix $A_\varepsilon $ is the $n\times n$ symmetric matrix with entries

$$ \begin{align*} (A_\varepsilon)_{ij}=\begin{cases}d_i^\varepsilon, & \text{if}\ i=j,\\ w_{ij}, & \text{if}\ \{i,j\}\in E(G),\\ 0, & \text{if}\ \{i,j\}\notin E(G),\end{cases} \end{align*} $$

where $w_{ij}$ is the number of cliques in $\varepsilon $ containing the edge $\{i,j\}$ .

Theorem 4.7 Let G be a graph without isolated vertices. For any clique covering $\varepsilon $ of G, we have

$$ \begin{align*} \alpha(G)\leq F(A_\varepsilon,x), \end{align*} $$

where x is a vector satisfying $x_u=d_u^\varepsilon $ . The equality holds if and only if there exists an independent set C such that $d_v^\varepsilon =\min _{u\in V(G)}d_u^\varepsilon $ for each $v\in C$ , and $|C\cap Q|=1$ for each $Q\in \varepsilon $ . Moreover, if an independent set C satisfies the above conditions, then

$$ \begin{align*} \alpha(G)=\Theta(G)=\vartheta(G)=|C|=F(A_\varepsilon,x). \end{align*} $$

Proof For a clique covering $\varepsilon =\{Q_1,\ldots ,Q_r\}$ of G, the corresponding vertex-clique incidence matrix B is a $|V(G)|\times r$ matrix with entries

$$ \begin{align*} (B)_{ij}=\begin{cases}1, & \text{if}\ i\in Q_j,\\ 0, & \text{if}\ i\notin Q_j.\end{cases} \end{align*} $$

Then $BB^\top =A_\varepsilon $ . Notice that $x=Be\in R(B)=R(A_\varepsilon )$ , so $(A_\varepsilon ,x)\in \mathcal {M}(G)$ . The conclusions follow from Theorems 3.1 and 4.2.

The following is an example for Theorem 4.7.

Example 4.8 The graph G in Figure 1 has a clique covering $\varepsilon=\{Q_1,Q_2,Q_3\}$, where $Q_1=\{1,2,3\},Q_2=\{2,4,5\},Q_3=\{3,5,6\}$. Then G has $\varepsilon$-degrees $d_1^\varepsilon=1$, $d_2^\varepsilon=2$, $d_3^\varepsilon=2$, $d_4^\varepsilon=1$, $d_5^\varepsilon=2$, $d_6^\varepsilon=1$ (see [25]). The independent set $C=\{1,4,6\}$ satisfies the conditions in Theorem 4.7. So we have

$$ \begin{align*} \alpha(G)=\Theta(G)=\vartheta(G)=|C|=3. \end{align*} $$

Figure 1: Graph G.
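The bound of Theorem 4.7 for Example 4.8 can be verified numerically as follows (an illustrative sketch assuming numpy; the vertex labels follow Figure 1).

```python
import numpy as np

# Clique covering of the graph in Figure 1: Q1 = {1,2,3}, Q2 = {2,4,5}, Q3 = {3,5,6}.
cliques = [{1, 2, 3}, {2, 4, 5}, {3, 5, 6}]
vertices = [1, 2, 3, 4, 5, 6]

B = np.array([[1.0 if v in Q else 0.0 for Q in cliques] for v in vertices])
M = B @ B.T                                # clique covering matrix A_eps = B B^T
x = B @ np.ones(len(cliques))              # epsilon-degrees (1, 2, 2, 1, 2, 1)

F = (x @ np.linalg.pinv(M) @ x) * np.max(np.diag(M) / x**2)
print(F)                                   # 3.0 = |C| for the independent set C = {1, 4, 6}
```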

Let $m(H)$ denote the matching number of a hypergraph H. The intersection graph $\Omega (H)$ of H has vertex set $E(H)$ , and two vertices $f_1,f_2$ of $\Omega (H)$ are adjacent if and only if $f_1\cap f_2\neq \emptyset $ . Clearly, we have $m(H)=\alpha (\Omega (H))$ . The vertex-edge incidence matrix B of H is a $|V(H)|\times |E(H)|$ matrix with entries

$$ \begin{align*} (B)_{uf}=\begin{cases}1, & \text{if}\ u\in f,\\ 0, & \text{if}\ u\notin f,\end{cases} \end{align*} $$

We say that H is k-uniform if each edge of H has exactly k vertices.

Theorem 4.9 Let H be a k-uniform hypergraph without isolated vertices, and let B be the vertex-edge incidence matrix of H. Then

$$ \begin{align*} m(H)\leq ke^\top(B^\top B)^\#e, \end{align*} $$

with equality if and only if H has a perfect matching. If H has a perfect matching, then

$$ \begin{align*} m(H)=\Theta(\Omega(H))=\vartheta(\Omega(H))=ke^\top(B^\top B)^\#e. \end{align*} $$

Proof Notice that $e=k^{-1}B^\top e\in R(B^\top )=R(B^\top B)$ , so $(B^\top B,e)\in \mathcal {M}(\Omega (H))$ and $F(B^\top B,e)=ke^\top (B^\top B)^\#e$ . The conclusions follow from Theorems 3.1 and 4.2.
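As a small illustration of Theorem 4.9 (a sketch assuming numpy; the cycle $C_6$ viewed as a $2$-uniform hypergraph is chosen only because it has an obvious perfect matching of size $3$), the bound $ke^\top(B^\top B)^\#e$ evaluates to $3=m(H)$.

```python
import numpy as np

# C_6 as a 2-uniform hypergraph: 6 vertices, 6 edges, perfect matching of size 3.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
k, nv = 2, 6

B = np.zeros((nv, len(edges)))             # vertex-edge incidence matrix of H
for j, (u, v) in enumerate(edges):
    B[u, j] = B[v, j] = 1

e = np.ones(len(edges))
print(k * e @ np.linalg.pinv(B.T @ B) @ e) # k e^T (B^T B)^# e = 3.0 = m(H)
```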

5 Schrijver theta function

In the following theorem, we show that the Schrijver theta function $\vartheta'(G)$ equals the minimum of the upper bound $F(M,x)$ from part (2) of Theorem 3.1 over all $(M,x)\in\mathcal{P}(G)$, and we obtain necessary and sufficient conditions for graphs satisfying $\alpha(G)=\vartheta'(G)$.

Theorem 5.1 For any graph G, we have

$$ \begin{align*} \alpha(G)\leq\min_{(M,x)\in\mathcal{P}(G)}F(M,x)=\vartheta'(G), \end{align*} $$

with equality if and only if there exist $(M,x)\in \mathcal {P}(G)$ and an independent set C satisfying the conditions given in part (2) of Theorem 3.1.

Proof By Theorem 3.1, we have

$$ \begin{align*} \alpha(G)\leq\min_{(M,x)\in\mathcal{P}(G)}F(M,x), \end{align*} $$

with equality if and only if there exist $(M,x)\in\mathcal{P}(G)$ and an independent set C satisfying the conditions given in part (2) of Theorem 3.1. We will prove that the minimum value equals $\vartheta'(G)$ by using a method similar to the proof of [19, Theorem 3].

We first prove that the minimum value does not exceed $\vartheta'(G)$. Let $A=(a_{ij})_{n\times n}$ be a real symmetric matrix indexed by vertices of G such that $\lambda_1(A)=\vartheta'(G)$ and $a_{ij}\geq1$ when $i=j$ or $i,j$ are two nonadjacent vertices. Then $\vartheta'(G)I-A$ is positive semidefinite, so there exists a real matrix T such that

$$ \begin{align*} \vartheta'(G)I-A=T^\top T \end{align*} $$

and the rank of T is smaller than the number of rows of T. Let c be a unit real vector such that $c^\top T=0$. Let $t_i$ denote the ith column of T; then

$$ \begin{align*} t_i^\top t_j=\vartheta'(G)\delta_{ij}-a_{ij}, \end{align*} $$

where $\delta _{ij}=1$ if $i=j$ , and $\delta _{ij}=0$ if $i\neq j$ . We set

$$ \begin{align*} u_i=\vartheta'(G)^{-\frac{1}{2}}(c+t_i),~i=1,\ldots,n, \end{align*} $$

and let U be the matrix whose ith column is $u_i$ . Then

$$ \begin{align*} (U^\top U)_{ii}&= \vartheta'(G)^{-1}(1+\vartheta'(G)-a_{ii})\leq1,~i\in V(G),\\ (U^\top U)_{ij}&= \vartheta'(G)^{-1}(1-a_{ij})\leq0,~\{i,j\}\notin E(G). \end{align*} $$

Take $z=U^\top c$; then $z_i=c^\top u_i=\vartheta'(G)^{-\frac{1}{2}}$ ($i=1,\ldots,n$) and $(U^\top U,z)\in\mathcal{P}(G)$. By Lemmas 2.2 and 2.3, we have

$$ \begin{align*} F(U^\top U,z)=z^\top(U^\top U)^\#z\max_{1\leq i\leq n}\frac{(U^\top U)_{ii}}{z_i^2}\leq\vartheta'(G)c^\top UU^+c\leq\vartheta'(G). \end{align*} $$

So we have

$$ \begin{align*} \min_{(M,x)\in\mathcal{P}(G)}F(M,x)\leq F(U^\top U,z)\leq\vartheta'(G). \end{align*} $$

We next prove that $\min _{(M,x)\in \mathcal {P}(G)}F(M,x)$ is at least $\vartheta '(G)$ . There exists $(M_0,y)\in \mathcal {P}(G)$ such that

$$ \begin{align*} F(M_0,y)=y^\top(M_0)^\#y\max_{1\leq i\leq n}\frac{(M_0)_{ii}}{y_i^2}=\min_{(M,x)\in\mathcal{P}(G)}F(M,x). \end{align*} $$

Since $M_0$ is positive semidefinite, there exists a real matrix B such that $M_0=B^\top B$. Since $y\in R(M_0)=R(B^\top)$ and $F(M_0,y)$ is invariant under nonzero scaling of y, we may assume that $y=B^\top c$ for a unit real vector $c\in R(B)$. Let $b_i$ be the ith column of B. By Lemmas 2.2 and 2.3, we have

$$ \begin{align*} F(M_0,y)=y^\top(M_0)^\#y\max_{1\leq i\leq n}\frac{(M_0)_{ii}}{y_i^2}=c^\top BB^+c\max_{1\leq i\leq n}\frac{b_i^\top b_i}{(c^\top b_i)^2}=\max_{1\leq i\leq n}\frac{b_i^\top b_i}{(c^\top b_i)^2}. \end{align*} $$

Let $A=(a_{ij})_{n\times n}$ be the matrix with entries

$$ \begin{align*} a_{ii}&= 1,~i=1,\ldots,n,\\ a_{ij}&= 1-\frac{b_i^\top b_j}{(c^\top b_i)(c^\top b_j)}=1-\frac{(M_0)_{ij}}{y_iy_j},~i\neq j. \end{align*} $$

Since $(M_0)_{ij}y_iy_j\leq 0$ for nonadjacent vertices $i,j$ , we have $a_{ij}\geq 1$ when $i=j$ or $i,j$ are two nonadjacent vertices. By the definition of $\vartheta '(G)$ , we have

$$ \begin{align*} \lambda_1(A)\geq\vartheta'(G). \end{align*} $$

We set

$$ \begin{align*} \beta=F(M_0,y)=\max_{1\leq i\leq n}\frac{b_i^\top b_i}{(c^\top b_i)^2}, \end{align*} $$

then

$$ \begin{align*} -a_{ij}&= \left(c-\frac{b_i}{(c^\top b_i)}\right)^\top\left(c-\frac{b_j}{(c^\top b_j)}\right),~i\neq j,\\ \beta-a_{ii}&= \left(c-\frac{b_i}{(c^\top b_i)}\right)^\top\left(c-\frac{b_i}{(c^\top b_i)}\right)+\beta-\frac{b_i^\top b_i}{(c^\top b_i)^2}. \end{align*} $$

Hence, $\beta I-A$ is positive semidefinite, that is,

$$ \begin{align*} \min_{(M,x)\in\mathcal{P}(G)}F(M,x)=\beta\geq\lambda_1(A)\geq\vartheta'(G). \end{align*} $$

The following is an example of a graph G satisfying $\alpha(G)=\vartheta'(G)<\vartheta(G)$ (see [22]).

Example 5.2 [22]

Let G be a graph with vertex set $\{0,1\}^6$ , and two vertices are adjacent if and only if their Hamming distance is at most $3$ . Then $\{000000,001111, 110011,111100\}$ is an independent set of G. It is known that $4=\vartheta '(G)<\vartheta (G)=16/3$ .
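The independence of the displayed set can be checked directly (a plain Python sketch using only the standard library): two words are adjacent exactly when their Hamming distance is at most $3$, so the set is independent precisely when all pairwise distances are at least $4$.

```python
from itertools import combinations

def hamming(u, v):
    """Hamming distance between two binary words of equal length."""
    return sum(a != b for a, b in zip(u, v))

S = ["000000", "001111", "110011", "111100"]
print(all(hamming(u, v) >= 4 for u, v in combinations(S, 2)))   # True
```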

By Theorems 3.1 and 5.1 and Lemma 2.5, we can get the following conclusion.

Theorem 5.3 Let $(M,x)\in\mathcal{P}(G)$ be a matrix-vector pair for a graph G. If G has an independent set C such that $(M,x)$ and C satisfy the conditions given in part (2) of Theorem 3.1, then

$$ \begin{align*} \alpha(G)=\vartheta'(G)=|C|=F(M,x). \end{align*} $$

6 Hoffman-type bounds

In this section, we show that the Hoffman-type bounds in [5, 14, 18, 20] are special cases of Theorem 3.1.

By Theorem 3.1, we recover the classical Hoffman bound below and obtain necessary and sufficient conditions for graphs attaining it. In particular, the necessary condition for equality given in Theorem 1.1 is in fact also sufficient.

Corollary 6.1 Let G be a k-regular ( $k\neq 0$ ) graph with n vertices, and let $\tau $ be the minimum adjacency eigenvalue of G. Then

$$ \begin{align*} \alpha(G)\leq n\frac{|\tau|}{k-\tau}, \end{align*} $$

with equality if and only if there exists an independent set C such that every vertex not in C is adjacent to exactly $|\tau |$ vertices of C. Moreover, if an independent set C satisfies the above condition, then

$$ \begin{align*} \alpha(G)=\Theta(G)=\vartheta(G)=|C|=n\frac{|\tau|}{k-\tau}. \end{align*} $$

Proof Since G is k-regular, we have $(A-\tau I,e)\in \mathcal {M}(G)$ , where A is the adjacency matrix of G. By Lemma 2.1, we get

$$ \begin{align*} (A-\tau I)^\#e& = (k-\tau)^{-1}e,\\ F(A-\tau I,e)& = |\tau|e^\top(A-\tau I)^\#e=n\frac{|\tau|}{k-\tau}. \end{align*} $$

The conclusions follow from Theorems 3.1 and 4.2.

Let $d_u$ denote the degree of a vertex u, and let $\overline {d}_S=|S|^{-1}\sum _{u\in S}d_u$ denote the average degree in a vertex set S. The largest Laplacian eigenvalue of a graph G is the largest eigenvalue of the Laplacian matrix of G.

The following bound for independent sets was given in Theorem 4.3 of [14] without a sufficient condition for the equality case.

Corollary 6.2 [14]

Let G be an n-vertex graph with the largest Laplacian eigenvalue $\mu>0$ . For any independent set S of G, we have

$$ \begin{align*} |S|\leq\frac{n(\mu-\overline{d}_S)}{\mu}, \end{align*} $$

with equality if and only if there exists a constant d such that $d_u=d$ for each $u\in S$ , and every vertex not in S is adjacent to exactly $\mu -d$ vertices of S.

Proof Let L be the Laplacian matrix of G. Since $(\mu I-L)e=\mu e$ , we have $(\mu I-L,e)\in \mathcal {M}(G)$ . By Lemma 2.1, we get

$$ \begin{align*} (\mu I-L)^\#e&= \mu^{-1}e,\\ F_S(\mu I-L,e)&= e^\top(\mu I-L)^\#e\sum_{u\in S}\frac{\mu-d_u}{|S|}=\frac{n}{\mu}(\mu-\overline{d}_S). \end{align*} $$

The conclusion follows from part (1) of Theorem 3.1.

The following bound was given independently in [20] and [14], and a necessary condition for the equality case was given in Theorem 3.2 of [20].

Corollary 6.3 [14, 20]

Let G be an n-vertex graph with the largest Laplacian eigenvalue $\mu>0$ and the minimum degree $\delta $ . Then

$$ \begin{align*} \alpha(G)\leq\frac{n(\mu-\delta)}{\mu}, \end{align*} $$

with equality if and only if there exists an independent set C such that $d_u=\delta $ for each $u\in C$ , and every vertex not in C is adjacent to exactly $\mu -\delta $ vertices of C. Moreover, if an independent set C satisfies the above conditions, then

$$ \begin{align*} \alpha(G)=\Theta(G)=\vartheta(G)=|C|=\frac{n(\mu-\delta)}{\mu}. \end{align*} $$

Proof Let L be the Laplacian matrix of G. Since $(\mu I-L)e=\mu e$ , we have $(\mu I-L,e)\in \mathcal {M}(G)$ . By Lemma 2.1, we get

$$ \begin{align*} (\mu I-L)^\#e&= \mu^{-1}e,\\ F(\mu I-L,e)&= (\mu-\delta)e^\top(\mu I-L)^\#e=\frac{n(\mu-\delta)}{\mu}. \end{align*} $$

The conclusions follow from Theorems 3.1 and 4.2.

For a graph G without isolated vertices, the normalized Laplacian matrix $\mathcal {L}$ of G is the $n\times n$ symmetric matrix with entries

$$ \begin{align*} (\mathcal{L})_{ij}=\begin{cases}1, & \text{if}\ i=j,\\ -(d_id_j)^{-\frac{1}{2}}, & \text{if}\ \{i,j\}\in E(G),\\ 0, & \text{if}\ \{i,j\}\notin E(G).\end{cases} \end{align*} $$

The largest normalized Laplacian eigenvalue of G is the largest eigenvalue of $\mathcal {L}$ .

The following bound was given in [18] without necessary and sufficient conditions for the equality case.

Corollary 6.4 [18]

Let G be a graph with m edges, the largest normalized Laplacian eigenvalue $\mu $ and the minimum degree $\delta>0$ . Then

$$ \begin{align*} \alpha(G)\leq\frac{2m(\mu-1)}{\mu\delta}, \end{align*} $$

with equality if and only if there exists an independent set C such that $d_u=\delta $ for each $u\in C$ , and every vertex v not in C is adjacent to exactly $(\mu -1)d_v$ vertices of C. Moreover, if an independent set C satisfies the above conditions, then

$$ \begin{align*} \alpha(G)=\Theta(G)=\vartheta(G)=|C|=\frac{2m(\mu-1)}{\mu\delta}. \end{align*} $$

Proof Let $\mathcal {L}$ be the normalized Laplacian matrix of G, and let $x=(d_1^{\frac {1}{2}},\ldots ,d_n^{\frac {1}{2}})^\top $ . Since $(\mu I-\mathcal {L})x=\mu x$ , we have $(\mu I-\mathcal {L},x)\in \mathcal {M}(G)$ . By Lemma 2.1, we get

$$ \begin{align*} (\mu I-\mathcal{L})^\#x&= \mu^{-1}x,\\ F(\mu I-\mathcal{L},x)&= \frac{\mu-1}{\delta}x^\top(\mu I-\mathcal{L})^\#x=\frac{2m(\mu-1)}{\mu\delta}. \end{align*} $$

The conclusions follow from Theorems 3.1 and 4.2.

The following bound was given in Theorem 6.1 of [14] without necessary and sufficient conditions for the equality case.

Corollary 6.5 [14]

Let G be an n-vertex graph, and let M be a positive semidefinite matrix indexed by vertices of G such that:

(a) $(M)_{ij}\leq 0$ if $i,j$ are two nonadjacent vertices.

(b) M has constant row sum $r>0$ .

(c) Every diagonal entry of M is t.

Then

$$ \begin{align*} \alpha(G)\leq\frac{nt}{r}, \end{align*} $$

with equality if and only if G has an independent set C such that $M[C]$ is diagonal and $\sum _{u\in C}(M)_{vu}=t~(v\notin C)$ . Moreover, if an independent set C satisfies the above conditions, then

$$ \begin{align*} \alpha(G)=\vartheta'(G)=|C|=\frac{nt}{r}. \end{align*} $$

Proof Since $Me=re$ , we have $(M,e)\in \mathcal {P}(G)$ . By Lemma 2.1, we get

$$ \begin{align*} M^\#e&=r^{-1}e,\\ F(M,e)&=te^\top M^\#e=\frac{nt}{r}. \end{align*} $$

The conclusions follow from Theorems 3.1 and 5.3.

7 Structural and spectral conditions

Let $\delta(G)$ denote the minimum degree of a graph G. In the following theorem, we give some simple structural and spectral conditions for determining the independence number $\alpha(G)$, the Shannon capacity $\Theta(G)$, and the Lovász theta function $\vartheta(G)$. These conditions mean that if a given independent set C satisfies one of them, then C must be a maximum independent set in G and $\Theta(G)=\vartheta(G)=|C|$.

Theorem 7.1 Let G be a graph with an independent set C. Then

$$ \begin{align*} \alpha(G)=\Theta(G)=\vartheta(G)=|C| \end{align*} $$

if C satisfies one of the following conditions:

(1) Every vertex not in C is adjacent to exactly $\lambda>-\tau $ vertices of C, where $\tau $ is the minimum adjacency eigenvalue of G.

(2) Every vertex not in C is adjacent to exactly $-\tau $ vertices of C and G is regular, where $\tau <0$ is the minimum adjacency eigenvalue of G.

(3) $d_u=\delta (G)$ for each $u\in C$ , and every vertex not in C is adjacent to exactly $\mu -\delta (G)$ vertices of C, where $\mu>0$ is the largest Laplacian eigenvalue of G.

(4) $d_u=\delta (G)>0$ for each $u\in C$ , and every vertex v not in C is adjacent to exactly $(\mu -1)d_v$ vertices of C, where $\mu $ is the largest normalized Laplacian eigenvalue of G.

(5) Every vertex not in C is adjacent to all vertices of C and $2|C|\geq |V(G)|- \delta (G-C)$ .

(6) G has a spanning subgraph H ( $\delta (H)>0$ ) which is semiregular bipartite with parameters $(n_1,n_2,r_1,r_2)$ ( $n_1\leq n_2$ ), and C is the independent set with $n_2$ vertices in H.

(7) There exists a clique covering $\varepsilon $ such that $d_v^\varepsilon =\min _{u\in V(G)}d_u^\varepsilon>0$ for each $v\in C$ , and $|C\cap Q|=1$ for each $Q\in \varepsilon $ .

(8) There exists a uniform hypergraph H such that $G=\Omega (H)$ and C corresponds to a perfect matching in H.

Proof Parts (1) and (2) follow from Corollaries 4.5 and 6.1, respectively. Parts (3) and (4) follow from Corollaries 6.3 and 6.4, respectively. Parts (5) and (6) follow from Examples 4.4 and 4.3, respectively. Parts (7) and (8) follow from Theorems 4.7 and 4.9, respectively.

A family $\mathcal{F}$ of k-subsets of $\{1,\ldots,n\}$ is called intersecting if $F_1\cap F_2\neq\emptyset$ for any $F_1,F_2\in\mathcal{F}$. For $n\geq2k$, Erdös, Ko, and Rado [11] proved that the maximum size of an intersecting family of k-subsets of $\{1,\ldots,n\}$ is $\binom{n-1}{k-1}$.

The Kneser graph $K(n,k)$ is the graph whose vertices are all k-subsets of $\{1,\ldots,n\}$, and two vertices $U_1,U_2\subseteq\{1,\ldots,n\}$ are adjacent if and only if $U_1\cap U_2=\emptyset$. The Erdös–Ko–Rado Theorem is equivalent to the statement that $\alpha(K(n,k))=\binom{n-1}{k-1}$ for $n\geq2k$. It is known [15, Theorem 9.4.3] that the minimum adjacency eigenvalue of $K(n,k)$ is $-\binom{n-k-1}{k-1}$. Since $K(n,k)$ is $\binom{n-k}{k}$-regular on $\binom{n}{k}$ vertices, the Hoffman bound for $K(n,k)$ is $\binom{n}{k}\binom{n-k-1}{k-1}\big/\big(\binom{n-k}{k}+\binom{n-k-1}{k-1}\big)=\binom{n-1}{k-1}$. So we have $\alpha(K(n,k))=\binom{n-1}{k-1}$ because $K(n,k)$ has an independent set of size $\binom{n-1}{k-1}$.

We can also obtain the Erdös–Ko–Rado Theorem without using the Hoffman bound. All k-subsets containing a fixed element, say $1$, form an independent set C of size $\binom{n-1}{k-1}$, and every vertex not in C is adjacent to exactly $\binom{n-k-1}{k-1}$ vertices of C. From part (2) of Theorem 7.1, we have

$$ \begin{align*} \alpha(K(n,k))=\Theta(K(n,k))=\vartheta(K(n,k))=|C|=\binom{n-1}{k-1}. \end{align*} $$
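The hypothesis of part (2) of Theorem 7.1 can be verified computationally for small Kneser graphs; the sketch below (plain Python, with $K(5,2)$ chosen as a small test case) checks that every vertex outside the star C has exactly $\binom{n-k-1}{k-1}=2$ neighbors in C.

```python
from itertools import combinations

n, k = 5, 2
V = list(combinations(range(1, n + 1), k))     # vertices of the Kneser graph K(5, 2)

def adjacent(U1, U2):
    return not set(U1) & set(U2)               # disjoint k-subsets are adjacent

C = [U for U in V if 1 in U]                   # the star of the element 1
print(len(C))                                  # binom(4, 1) = 4 = alpha(K(5, 2))
print(all(sum(adjacent(U, W) for W in C) == 2  # 2 = binom(n-k-1, k-1)
          for U in V if U not in C))           # True
```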

8 Concluding remarks

Let $M^{\otimes k}$ denote the Kronecker product of k copies of matrix M. If there is some k such that $\alpha (G^k)=\vartheta (G)^k$ , then $\Theta (G)=\vartheta (G)$ . We can derive the following characterization of graphs satisfying $\alpha (G^k)=\vartheta (G)^k$ .

Theorem 8.1 Let $(M,x)\in\mathcal{M}(G)$ be a matrix-vector pair for a graph G such that $\vartheta(G)=F(M,x)$, and let k be a positive integer. Then $\alpha(G^k)=\vartheta(G)^k$ if and only if $G^k$ has an independent set C such that $(M^{\otimes k},x^{\otimes k})$ and C satisfy the conditions given in part (2) of Theorem 3.1. Moreover, if $G^k$ has an independent set C satisfying the above conditions, then

$$ \begin{align*} \alpha(G^k)=\vartheta(G)^k=\Theta(G)^k=|C|. \end{align*} $$

Proof Since $(M,x)\in \mathcal {M}(G)$ , we have $(M^{\otimes k},x^{\otimes k})\in \mathcal {M}(G^k)$ . By Theorem 3.1, we have

$$ \begin{align*} \alpha(G^k)\leq F(M^{\otimes k},x^{\otimes k})=F(M,x)^k=\vartheta(G)^k, \end{align*} $$

with equality if and only if $G^k$ has an independent set C such that $(M^{\otimes k},x^{\otimes k})$ and C satisfy the conditions given in part (2) of Theorem 3.1. Moreover, if $G^k$ has an independent set C satisfying the above conditions, then by Theorem 4.2, we have

$$ \begin{align*} \alpha(G^k)=\vartheta(G^k)=\Theta(G^k)=|C|=F(M,x)^k=\vartheta(G)^k. \end{align*} $$

Since $\alpha (G^k)^{\frac {1}{k}}\leq \Theta (G)\leq \vartheta (G)$ , we have $\Theta (G)=\vartheta (G)$ .

In [19], Lovász proved that $\Theta(C_5)=\sqrt{5}$. The cycle $C_5$ is also an example for Theorem 8.1.

Example 8.2 Let A be the adjacency matrix of the cycle $C_5$, and let $\varphi=\frac{1+\sqrt{5}}{2}$, so that $-\varphi$ is the minimum adjacency eigenvalue of $C_5$ and $F(A+\varphi I,e)=\frac{5\varphi}{2+\varphi}=\sqrt{5}=\vartheta(C_5)$. Then $(A+\varphi I,e)\in\mathcal{M}(C_5)$ and $((A+\varphi I)^{\otimes 2},e^{\otimes 2})\in\mathcal{M}(C_5^2)$. Suppose that $\{0,1,2,3,4\}$ is the vertex set of $C_5$ and $E(C_5)=\{\{0,1\},\{1,2\},\{2,3\},\{3,4\},\{4,0\}\}$. Then $S=\{00,12,24,31,43\}$ is an independent set of $C_5^2$. By computation (using $\varphi^2=\varphi+1$), we know that $((A+\varphi I)^{\otimes 2},e^{\otimes 2})$ and S satisfy the conditions given in part (2) of Theorem 3.1. By Theorem 8.1, we have $\alpha(C_5^2)=\vartheta(C_5)^2=\Theta(C_5)^2=5$.
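The conditions invoked in Example 8.2 can be verified numerically (an illustrative sketch assuming numpy; vertex $(a,b)$ of $C_5^2$ is indexed as $5a+b$, and $\varphi$ denotes the golden ratio as in the example).

```python
import numpy as np

A = np.zeros((5, 5))                           # adjacency matrix of C_5
for i in range(5):
    A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1

phi = (1 + np.sqrt(5)) / 2                     # -phi is the least adjacency eigenvalue of C_5
M2 = np.kron(A + phi * np.eye(5), A + phi * np.eye(5))
S = [5 * a + b for a, b in [(0, 0), (1, 2), (2, 4), (3, 1), (4, 3)]]

# Part (2) of Theorem 3.1 with x = e: M2[S] is diagonal with constant diagonal
# c = phi^2, and every row outside S sums to c over the columns indexed by S.
print(np.allclose(M2[np.ix_(S, S)], phi**2 * np.eye(5)))                          # True
print(all(abs(M2[v, S].sum() - phi**2) < 1e-9 for v in range(25) if v not in S))  # True
```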

Acknowledgment

The author would like to thank the reviewer and the editors for their valuable comments and suggestions.

Footnotes

This work is supported by the National Natural Science Foundation of China (Grant No. 12071097) and the Natural Science Foundation for The Excellent Youth Scholars of the Heilongjiang Province (Grant No. YQ2022A002).

References

[1] Alon, N., The Shannon capacity of a union. Combinatorica 18(1998), 301–310.
[2] Ben-Israel, A. and Greville, T. N. E., Generalized inverses: Theory and applications. 2nd ed., Springer, New York, 2003.
[3] Berman, A. and Zhang, X. D., On the spectral radius of graphs with cut vertices. J. Combin. Theory Ser. B 83(2001), 233–240.
[4] Bohman, T., A limit theorem for the Shannon capacities of odd cycles I. Proc. Amer. Math. Soc. 131(2003), 3559–3569.
[5] Brouwer, A. E. and Haemers, W. H., Spectra of graphs. Springer, New York, 2012.
[6] Chung, F. R. K. and Yau, S. T., Discrete Green's functions. J. Combin. Theory Ser. A 91(2000), 191–214.
[7] Cvetković, D., Rowlinson, P., and Simić, S., An introduction to the theory of graph spectra. Cambridge University Press, Cambridge, 2010.
[8] Delsarte, P., An algebraic approach to the association schemes of coding theory. Philips Res. Rep. Suppl. 10(1973), vi+97.
[9] Ellis, D., Filmus, Y., and Friedgut, E., Triangle-intersecting families of graphs. J. Eur. Math. Soc. 14(2012), 841–885.
[10] Ellis, D., Friedgut, E., and Pilpel, H., Intersecting families of permutations. J. Amer. Math. Soc. 24(2011), 649–682.
[11] Erdös, P., Ko, C., and Rado, R., Intersection theorems for systems of finite sets. Quart. J. Math. Oxford Ser. 12(1961), 313–320.
[12] Frankl, P. and Wilson, R. M., The Erdös–Ko–Rado theorem for vector spaces. J. Combin. Theory Ser. A 43(1986), 228–236.
[13] Friedgut, E., On the measure of intersecting families, uniqueness and stability. Combinatorica 28(2008), 503–528.
[14] Godsil, C. D. and Newman, M. W., Eigenvalue bounds for independent sets. J. Combin. Theory Ser. B 98(2008), 721–734.
[15] Godsil, C. D. and Royle, G., Algebraic graph theory. Springer, New York, 2001.
[16] Haemers, W. H., On some problems of Lovász concerning the Shannon capacity of a graph. IEEE Trans. Inform. Theory 25(1979), 231–232.
[17] Haemers, W. H., Eigenvalue techniques in design and graph theory. Ph.D. thesis, Technical University Eindhoven, 1979; Mathematical Centre Tracts, 121, Amsterdam, 1980.
[18] Harant, J. and Richter, S., A new eigenvalue bound for independent sets. Discrete Math. 338(2015), 1763–1765.
[19] Lovász, L., On the Shannon capacity of a graph. IEEE Trans. Inform. Theory 25(1979), 1–7.
[20] Lu, M., Liu, H., and Tian, F., Laplacian spectral bounds for clique and independence numbers of graphs. J. Combin. Theory Ser. B 97(2007), 726–732.
[21] Polak, S. C. and Schrijver, A., New lower bound on the Shannon capacity of ${C}_7$ from circular graphs. Inform. Process. Lett. 143(2019), 37–40.
[22] Schrijver, A., A comparison of the Delsarte and Lovász bounds. IEEE Trans. Inform. Theory 25(1979), 425–429.
[23] Shannon, C. E., The zero-error capacity of a noisy channel. IRE Trans. Inform. Theory 2(1956), 8–19.
[24] Spielman, D. A., Graphs, vectors, and matrices. Bull. Amer. Math. Soc. 54(2017), 45–61.
[25] Zhou, J. and Bu, C., Eigenvalues and clique partitions of graphs. Adv. Appl. Math. 129(2021), 102220.