While reading the input, we can maintain two variables that record the points currently scored by the first team and by the second team, respectively. After reading a line, update these two variables and then determine whether the current round has ended.
The judgment proceeds as follows:
1. If and , then the first team wins
2. If and , then the second team wins
3. Otherwise, the current round is not over yet
If a team has won two rounds, exit the loop directly and output which team wins the overall game.
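The loop above can be sketched as follows. The concrete win condition of the problem is not given in the text, so this sketch ASSUMES a volleyball-style rule (first to 25 points with a lead of at least 2) purely for illustration; `winner_of_match` and its parameters are hypothetical names.

```python
def winner_of_match(lines, target=25, lead=2):
    """Read running scores line by line; return 1 or 2 for the winning
    team once it has won two rounds, or 0 if the match is undecided.
    The (target, lead) round-ending rule is an illustrative assumption."""
    a = b = 0          # current points of team 1 / team 2 in this round
    wins = [0, 0]      # rounds won by each team
    for line in lines:
        a, b = map(int, line.split())   # update the two variables
        if a >= target and a - b >= lead:       # team 1 takes the round
            wins[0] += 1
            a = b = 0
        elif b >= target and b - a >= lead:     # team 2 takes the round
            wins[1] += 1
            a = b = 0
        if wins[0] == 2:
            return 1                    # exit the loop directly
        if wins[1] == 2:
            return 2
    return 0
```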
Simply enumerate all possible decompositions.
In this case, it is impossible to form a 'Chow', as there are only different kinds of tiles; therefore only 'Pong' can be used. A hand of tiles is then legal if and only if the count of every symbol is a multiple of 3, the number of tiles required by a 'Pong'.
The time complexity of this algorithm is .
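The Pong-only check above reduces to counting tiles per symbol; a minimal sketch (`pong_only_legal` is an illustrative name):

```python
from collections import Counter

def pong_only_legal(tiles):
    # A hand is legal iff every symbol's count is a multiple of 3,
    # since only 'Pong' (a triplet of identical tiles) can be formed.
    return all(c % 3 == 0 for c in Counter(tiles).values())
```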
It's easy to see that for the smallest available tile, matching 'Pong' is always better than matching 'Chow'. So the legality of a hand can be checked by enumerating from the first tile to the th tile.
Each time we visit the th tile, let be the number of remaining tiles with the th symbol. We first use them to form 'Pongs'. To use up the remaining tiles with the th symbol, we have to use tiles of each symbol in the range to form 'Chows'. If there are not enough tiles, the current hand must be illegal.
The time complexity of this procedure is . This algorithm can be further improved to by using a difference array.
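The greedy check can be sketched as below, assuming tiles are indexed consecutively so that a 'Chow' consists of three consecutive symbols (`hand_legal` is an illustrative name):

```python
def hand_legal(cnt):
    """Greedy check described above. cnt[i] = number of tiles of symbol i
    (0-indexed). Leftover tiles after forming Pongs at symbol i must each
    start a Chow consuming one tile of symbols i+1 and i+2."""
    cnt = list(cnt)
    n = len(cnt)
    for i in range(n):
        r = cnt[i] % 3          # tiles left after forming Pongs
        if r == 0:
            continue
        if i + 2 >= n or cnt[i + 1] < r or cnt[i + 2] < r:
            return False        # not enough tiles to finish the Chows
        cnt[i + 1] -= r
        cnt[i + 2] -= r
    return True
```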
Consider a bitmask (state-compression) enumeration over all individuals: enumerate which people we select, and then compute each case separately to determine the answer. Time complexity .
First sort the data in descending order. For each prefix, compute the corresponding numbers and . Let be the sum of all the data and the sum of the data in the prefix ; by enumerating we obtain , , and then take the maximum value of over all . For the sorting, algorithms such as bubble sort, selection sort, or insertion sort achieve a time complexity of .
You can use algorithms such as merge sort, or use the built-in sort, to achieve a time complexity of .
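The sort-then-enumerate-prefixes pattern looks like the sketch below. The concrete objective formula is elided in the original, so it is passed in as a parameter; `best_prefix` and `score` are illustrative names.

```python
def best_prefix(data, score):
    """Sort in descending order, build prefix sums, and return the best
    value of a caller-supplied score(prefix_sum, total, k) over all
    prefixes of length k."""
    data = sorted(data, reverse=True)   # built-in O(n log n) sort
    total = sum(data)
    best, pref = None, 0
    for k, x in enumerate(data, start=1):
        pref += x                       # sum of the top k elements
        v = score(pref, total, k)
        if best is None or v > best:
            best = v
    return best
```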
It is easy to see that the final result does not depend on the order of operations. You can enumerate whether each is reversed, and then check legality one by one.
Time complexity .
It is easy to find that must be equal to , and must be equal to . Consider coloring each position by the parity of its number of operations:
Here red represents the cells on the diagonal, the purple box on the diagonal represents , blue represents an odd number of operations, and purple represents an even number of operations. It can be seen that the matrix entries in the purple, white, and red areas must be .
That is, if , , then and .
Consider constructing an undirected graph with points, where the edge exists if and only if . It is easy to see that there is no edge within the point set corresponding to , and no edge within the point set corresponding to ; that is, must be a bipartite graph.
It is easy to prove that being a bipartite graph is also a sufficient condition for a solution to exist.
Assume the point set corresponding to is colored black; then a construction scheme is equivalent to a black-and-white coloring of the graph such that, subject to the number of black points being as small as possible, the black point set has the smallest possible lexicographical order.
It is easy to prove that, of the two coloring schemes of each connected component, greedily choosing the one that first minimizes the number of black points and then minimizes the lexicographical order of the black point set is always optimal.
Finally, sort all the black point labels and output them.
Time complexity .
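A sketch of the per-component greedy above, assuming 0-indexed vertices (`choose_black` is an illustrative name):

```python
from collections import deque

def choose_black(n, edges):
    """2-color each connected component by BFS; per component pick the
    side that (1) has fewer vertices and (2) on a tie has the
    lexicographically smaller sorted label list. Returns the sorted black
    labels, or None if the graph is not bipartite."""
    g = [[] for _ in range(n)]
    for u, v in edges:
        g[u].append(v)
        g[v].append(u)
    color = [-1] * n
    black = []
    for s in range(n):
        if color[s] != -1:
            continue
        color[s] = 0
        side = [[s], []]
        q = deque([s])
        while q:
            u = q.popleft()
            for v in g[u]:
                if color[v] == -1:
                    color[v] = color[u] ^ 1
                    side[color[v]].append(v)
                    q.append(v)
                elif color[v] == color[u]:
                    return None        # odd cycle: not bipartite
        a, b = sorted(side[0]), sorted(side[1])
        black += a if (len(a), a) <= (len(b), b) else b
    return sorted(black)
```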
Output the bitwise OR of all numbers.
We can design a dp algorithm:
Let denote the minimum total value of splitting the first into several substrings; obviously , where denotes the value of the substring .
Brute force can do it in .
Then we observe a fact: for a fixed , there are only distinct values of . This is because whenever the bitwise OR changes as increases, some bit must flip from 0 to 1.
Therefore, for a fixed , we can find intervals satisfying , such that every in an interval satisfies , the value of is constant within each interval, and it differs between different intervals.
At the same time, the minimum value of within each interval can be maintained; then process all intervals by brute force using the distinct values of and the minimum of . The time complexity is , which can pass all the data.
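The interval trick can be sketched as follows. The exact objective is elided in the original, so this sketch ASSUMES the value of a substring is its bitwise OR and the goal is to minimize the sum over the parts; `min_split_or_sum` is an illustrative name.

```python
def min_split_or_sum(a):
    """For every right endpoint, keep the O(log max(a)) distinct values
    of OR(a[j..i]) together with the best dp value among the j's that
    produce each OR, then relax the dp over those few states."""
    dp = 0          # dp over the empty prefix
    states = []     # (OR of a[j..i-1], min dp[j]) per distinct OR value
    for x in a:
        new = {}
        for or_v, best in states:       # extend old segments by x
            k = or_v | x
            if k not in new or best < new[k]:
                new[k] = best
        if x not in new or dp < new[x]:  # start a new segment at x
            new[x] = dp
        states = list(new.items())
        dp = min(best + or_v for or_v, best in states)
    return dp
```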
Simply enumerate edge cutting schemes, and find the centroid for each connected block.
For each node , set as the root of the whole tree and use a tree DP to compute the probability that is the centroid of its component. For each node , let be the probability that the subtree of node (when the root is ) contains exactly nodes. The array can be computed in time, and the desired probability is then obtained by enumerating the number of nodes in the same connected component as node .
The total time complexity is .
The point becomes the centroid if and only if its interval satisfies or ; the number of such schemes can be counted directly. Complexity .
Let us consider the probability that the th node is not a centroid. Node is not a centroid if and only if there is a node such that (1) the edge between and is preserved, and (2) after the edge is deleted, the number of nodes in the same component as node is strictly smaller than the number in the same component as node .
Choose an arbitrary node as the root of the whole tree. Let be the probability that exactly nodes in the subtree of node belong to the connected component of node after the edges are cut. can be computed in by a simple tree DP.
Now, for each node , for each neighbor of , and for each integer , we need the probability that, when is set as the root of the whole tree, (1) and belong to the same connected component, and (2) the number of nodes in the subtree of that belong to the connected component of node is larger than half of that of node .
Such a probability can be computed from the array defined as follows. For a node , where is an edge of the tree, and an integer , represents the probability that, after setting node as the root, the number of nodes in the component of that are not in the subtree of node is exactly .
Although can be computed by a tree DP, a naive algorithm costs time. An important trick here is that contributes to the result only when the number of nodes belonging to both the component of and the subtree of is strictly larger (or strictly smaller) than ; thus the values of can be merged into when .
Using this property, it can be proven that the time complexity of the tree DP is improved to .
For each query, multiply all the polynomials in the query interval by brute force, and directly check whether the resulting polynomial is a palindrome polynomial. The time complexity is .
Consider accelerating the polynomial multiplication of Subtask 1: with divide and conquer plus NTT, this can be solved in time .
It can be found that, when each position is a linear polynomial, the product can be a palindrome polynomial only if every linear factor either is itself a palindrome polynomial or is paired with another linear factor whose root is the reciprocal of its own. Using this property, we can maintain with a set hash both the multiset of linear factors of the interval and the multiset of linear factors with reciprocal roots, and check whether the two are identical.
There are two possible approaches here. One is to factor each polynomial into linear factors by brute force and then apply the previous method, which may not be particularly easy to implement.
Alternatively, use a brute-force method accidentally discovered by the problem setter: directly maintain the first and last several coefficients of the product of the polynomials. Brute-force verification shows this is correct; the proof is left to the reader.
When a degree polynomial satisfies , we can find that , that is, we actually only need to judge whether and are the same.
We need to quickly determine whether two high-degree polynomials are the same. With brute force the cost depends on the degree, which is unacceptable, so we use polynomial hashing here.
Let the hash function be . This hash function can easily be maintained with a segment tree; we now consider proving its correctness.
If the hash values of two polynomials of the same degree are equal, then must hold. When and are different, is a polynomial of degree at most ; since the modulus is a prime number, it has at most zeros. If is chosen uniformly at random, the probability that is . Therefore the probability that two polynomials are different but their hash values coincide is .
So we choose to be a prime on the order of . The probability that a single query succeeds is , and the probability that all queries succeed is about , so a single hash cannot meet our requirements and a double hash is needed, which makes the probability that all queries succeed , about .
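The random-evaluation argument above can be sketched directly: evaluate both polynomials at a random point modulo a large prime, and repeat independently to play the role of the double hash (`poly_hash_equal` is an illustrative name; the document's segment-tree maintenance is not shown).

```python
import random

def poly_hash_equal(p, q, trials=2):
    """Probabilistic equality test for two coefficient lists (ascending
    order). Distinct polynomials of degree <= d agree at a random point
    with probability at most d/MOD per trial, so two independent trials
    act as a double hash."""
    MOD = (1 << 61) - 1          # a large Mersenne prime
    for _ in range(trials):
        x = random.randrange(1, MOD)
        hp = hq = 0
        for c in reversed(p):    # Horner evaluation of p at x
            hp = (hp * x + c) % MOD
        for c in reversed(q):    # Horner evaluation of q at x
            hq = (hq * x + c) % MOD
        if hp != hq:
            return False
    return True
```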
So far, the problem can be solved within a complexity of .
Take out the chain of each query by brute force, and then simulate it.
Time complexity .
Take out the chain of each query by brute force, and use hashing to determine whether each prefix is a palindrome.
Time complexity .
This part mainly consists of various square-root approaches and approaches with larger constant factors.
One square-root approach is to use Mo's algorithm on the tree together with a two-way PAM. Time complexity , where denotes the size of the character set.
Consider taking all queries offline, sorted by left endpoint position. A PAM (palindrome tree) is used to maintain the palindromic structure and the depth of each node during the scan from right to left. For each query, use binary lifting to find the node corresponding to the first palindromic prefix; its depth in the palindrome tree is the answer.
Time complexity .
First take all queries offline, and then divide and conquer on the tree: each centroid processes all queries that pass through it within its connected block.
For a query , let be the subtree containing and the subtree containing , and split into cases according to the position of the palindrome:
The red line segment is the palindrome prefix position, and the blue line is the palindrome center.
Let denote the string from root to .
The entire palindrome lies in , which is equivalent to counting the palindromic suffixes of . This can be handled with a dfs that maintains a revocable PAM; the number of palindromic suffixes equals the depth of the corresponding node in the fail tree of the PAM.
The total complexity of this part is .
The palindrome passes through the root into the other subtree, while the center of the palindrome remains in the subtree .
According to the properties of palindromes, the string can be divided into three parts , where is a prefix of and is a suffix of , such that is a palindrome and .
Lemma 1: If and are both palindromes, then must be a palindrome.
Lemma 2: All palindromic prefixes of a string can be divided into arithmetic progressions.
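Lemma 2 can be checked empirically with a brute-force sketch: list all palindromic prefix lengths of a string and greedily group them into arithmetic progressions (both function names are illustrative; the greedy grouping is only a demonstration, not the PAM-based decomposition used later).

```python
def palindromic_prefix_lengths(s):
    # All lengths L such that s[:L] is a palindrome (brute force).
    return [L for L in range(1, len(s) + 1) if s[:L] == s[:L][::-1]]

def group_into_aps(lengths):
    """Greedily group a sorted list into maximal arithmetic progressions;
    by Lemma 2, palindromic prefix lengths need only O(log n) groups."""
    groups = []
    for L in lengths:
        if groups and len(groups[-1]) >= 2 and \
           L - groups[-1][-1] == groups[-1][-1] - groups[-1][-2]:
            groups[-1].append(L)        # continues the current AP
        elif groups and len(groups[-1]) == 1:
            groups[-1].append(L)        # second element fixes the step
        else:
            groups.append([L])          # start a new AP
    return groups
```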
We use Lemma 2 to decompose into palindromic prefixes. Each arithmetic progression is expressed in the form . (Here denotes the string formed by repeating the string times, and denotes the concatenation of the strings and .)
Consider the case ; at this time is divided into . We may decompose into the form , where :
must be represented as . According to Lemma 1, cannot be a prefix of and . At this time if:
For the palindrome decomposition, the PAM of Case 1 can be used to find, for each node, the deepest ancestor that does not satisfy the arithmetic progression. To compute the decomposition of , maintain a hash of and use binary lifting.
The total complexity of this part is .
The palindrome is centered in the subtree. Similar to Case 2, we decompose the palindrome into , at this time .
Similarly, we decompose into the form , and into the form , and take out accordingly.
Similarly, split into cases according to whether is a prefix of ; if not, check for the unique . Otherwise the result is the difference between the feasible powers of the two sides.
The complexity of this part is .
Total complexity .