rng_58's blog

By rng_58, history, 5 years ago, In English

Sorry for the late announcement and the unusual time. It was hard to find a good slot (a local marathon contest and an SRM on Saturday, Open Cup on Sunday, a sponsored ARC next week, TCO after that, etc.).

We hope to hold two more AGCs this year (excluding this AGC), and we guarantee at least one.

We will hold AtCoder Grand Contest 040. This contest counts for GP30 scores.

The point values will be 300 - 600 - 800 - 1100 - 1400 - 2200.

We are looking forward to your participation!


»
5 years ago, # |

I have to say I really love this start time. US time is changing one hour earlier soon, which will make the usual AtCoder start time 7am ET...

»
5 years ago, # |

Auto comment: topic has been updated by rng_58 (previous revision, new revision, compare).

»
5 years ago, # |

Will this contest have an editorial?

»
5 years ago, # |
Rev. 2

How to solve C? I only found that a sequence can be eliminated iff $$$-(cou[AC] + cou[CB] + cou[CC]) \leq cou[AB] - cou[BA] \leq (cou[BC] + cou[CA] + cou[CC])$$$

This is because we can first choose, for every $$$C$$$, whether it acts as an $$$A$$$ or a $$$B$$$, and then eliminate the resulting string containing only $$$A$$$'s and $$$B$$$'s. This is possible iff $$$cou[AB] = cou[BA]$$$.
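A conjecture like this can be sanity-checked by brute force on small strings. Below is a minimal sketch of my own (it assumes the statement's rule that any two adjacent characters except the pairs AB and BA may be removed):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def eliminable(s: str) -> bool:
    """True if s can be fully erased by repeatedly removing two adjacent
    characters, where the pairs "AB" and "BA" may not be removed."""
    if not s:
        return True
    return any(
        s[i:i + 2] not in ("AB", "BA") and eliminable(s[:i] + s[i + 2:])
        for i in range(len(s) - 1)
    )
```

For example, `eliminable("ABBA")` is True (remove BB, then AA), while `eliminable("ABAB")` is False since every adjacent pair is forbidden.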

  • »
    »
    5 years ago, # ^ |

    Change every A at an odd position into B and every B at an odd position into A. Now you can't delete AA or BB. Any string with at most $$$n/2$$$ As and at most $$$n/2$$$ Bs is good.
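Based on this characterization, the resulting count can be sketched as follows (my own sketch, not the editorial's code; the function name `count_eliminable` is mine):

```python
from math import comb

def count_eliminable(n: int) -> int:
    # After the odd-position flip, a string over {A, B, C} is claimed to be
    # eliminable iff it contains at most n/2 A's and at most n/2 B's.
    # "More than n/2 A's" and "more than n/2 B's" cannot both hold (the two
    # counts would sum to more than n), so the bad sets are disjoint and
    # each can be subtracted once.
    bad = sum(comb(n, i) * 2 ** (n - i) for i in range(n // 2 + 1, n + 1))
    return 3 ** n - 2 * bad
```

For instance, `count_eliminable(2)` gives 7: out of the 9 strings of length 2, only AA and BB (after flipping, AB and BA) are bad.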

  • »
    »
    5 years ago, # ^ |

    A typical trick of inverting characters at odd positions. I have failed on this problemsetting trick 5+ times. Why don't I change...

  • »
    »
    5 years ago, # ^ |
    Rev. 4

    For any sequence, build a bipartite graph where the left half is the odd positions and the right half is the even positions. Connect two positions if they do not form $$$AB$$$ or $$$BA$$$. My guess (sorry, no proof yet) is that the sequence can be eliminated iff the graph has a perfect matching.

    Then, by Hall's theorem, we get something like $$$cnt[A] + cnt[C] \geq cnt[A']$$$, $$$cnt[A'] + cnt[C'] \geq cnt[A]$$$, $$$cnt[B] + cnt[C] \geq cnt[B']$$$ and $$$cnt[B'] + cnt[C'] \geq cnt[B]$$$. Here $$$A, B, C$$$ count the odd (left) positions and $$$A', B', C'$$$ the even ones. Then the problem becomes a much more straightforward one.

    P.S. Oh god, this sounds like overkill.

    • »
      »
      »
      5 years ago, # ^ |

      For the string $$$ABAABA$$$, $$$\{ 0-3, 1-4, 2-5 \}$$$ is a perfect matching but it is not possible to eliminate the string by pairing those positions.

      • »
        »
        »
        »
        5 years ago, # ^ |

        I can iteratively remove the two middle elements, right?

        • »
          »
          »
          »
          »
          5 years ago, # ^ |

          Sure, but then you're pairing different positions ($$$\{ 0-5, 1-4, 2-3 \}$$$).

          • »
            »
            »
            »
            »
            »
            5 years ago, # ^ |

            Yeah. There is no one-to-one correspondence between matchings and elimination orders. I think my guess was just accidentally right because of a special property of this problem.

  • »
    »
    5 years ago, # ^ |

    My approach seems different from the editorial's.

    I try to count the number of non-emptiable strings. If $$$s$$$ is emptiable, then there is a way that, after replacing every $$$C$$$ with either $$$A$$$ or $$$B$$$, $$$s$$$ becomes a valid bracket sequence if we regard $$$AB$$$ as the left (resp. right) bracket and $$$BA$$$ as the right (resp. left) bracket. Since $$$A$$$ and $$$B$$$ are equivalent (swappable), the answer will be

    $$$3^n - 2\cdot(\text{the number of non-emptiable strings if we regard } AB \text{ as the left bracket}).$$$

    Now, if $$$AB$$$ is regarded as a left bracket ($$$+1$$$), then $$$BA, CA, CC, BC$$$ (4 items) are regarded as right brackets ($$$-1$$$), and $$$AA, BB, AC, CB$$$ (4 items) are regarded as nothing, i.e. empty brackets ($$$0$$$). Now consider the generating function $$$f(x) = x + 4 + 4x^{-1}$$$; the number of non-emptiable strings, if we regard $$$AB$$$ as the left bracket, is the sum of the coefficients of $$$x^k$$$ for all $$$k > 0$$$ in $$$(f(x))^{n/2}$$$.
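One way to evaluate this sum is a sketch of my own: convolve the step polynomial $$$x + 4 + 4x^{-1}$$$ with itself $$$m = n/2$$$ times and add up the coefficients of the positive powers.

```python
def count_positive(m: int) -> int:
    # Sum of coefficients of x^k for k > 0 in (x + 4 + 4*x^{-1})^m.
    # Exponents are shifted by +m, so index j stands for x^(j - m).
    # Steps mirror the comment above: +1 with weight 1 (the pair AB),
    # 0 with weight 4, and -1 with weight 4.
    poly = [0] * (2 * m + 1)
    poly[m] = 1
    for _ in range(m):
        nxt = [0] * (2 * m + 1)
        for j, c in enumerate(poly):
            if c:
                for shift, w in ((1, 1), (0, 4), (-1, 4)):
                    if 0 <= j + shift <= 2 * m:
                        nxt[j + shift] += c * w
        poly = nxt
    return sum(poly[m + 1:])
```

As a quick check, $$$(x + 4 + 4x^{-1})^2 = x^2 + 8x + 24 + 32x^{-1} + 16x^{-2}$$$, so `count_positive(2)` is $$$1 + 8 = 9$$$.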

    • »
      »
      »
      5 years ago, # ^ |

      I'm counting non-emptiable strings with a different condition: let $$$A_1$$$ be the number of 'A's at odd positions and $$$B_0$$$ the number of 'B's at even positions; if $$$A_1 + B_0 > N/2$$$, the string is non-emptiable. The same holds with 'A' and 'B' flipped, which gives the same number of cases. Why are they non-emptiable? The number of 'A' or 'C' at even positions is $$$N/2 - B_0 < A_1$$$, and each 'A' at an odd position has to be paired with one of them, which is impossible.

      I haven't proved why all other strings are emptiable, but it makes sense intuitively: there will always be a pair 'AA', 'BB' or a suitable removal involving 'C'.

»
5 years ago, # |

Is C $$$= \sum_{i=0}^{n/2} \binom{n/2}{i} \cdot 2^{2i} \cdot 3^{n/2-i}$$$?

»
5 years ago, # |

I think the editorial for A is slightly wrong.

It says "Thus, the answer is the sum of these lower limits". I think it should be the max, right?

»
5 years ago, # |

I just wanted to say that problems are of amazing quality O_O

maroonrk, how do you even do this every time?

»
5 years ago, # |
Rev. 2

What the hell is wrong with my solution for D? ...

My idea is that when the planks are split into those where Snuke and Ringo are getting closer ("near") and where they're getting farther away from each other ("far"), then we want some "far" planks at the beginning, then some "near" planks and finally the remaining "far" planks. These last "far" planks can be ignored — if Ringo is on them, there's no way to reach him. Now, if Snuke can catch Ringo, then Snuke can reach the end of the sequence of "near" planks faster, so we want to find a point $$$x$$$ such that the time Ringo takes to reach the end of the "near" planks from this point is greater than the sum of $$$A_i$$$ of all "near" planks and all "far" planks put at the beginning; the probability is $$$x/N$$$.

Then, if $$$x$$$ is on the "far" planks, we can throw out a "far" plank and $$$x$$$ won't decrease. Therefore, $$$x$$$ only depends on $$$B_i$$$ of "near" planks, not "far" planks, and for a fixed number of "far" planks at the beginning, we want the sum of $$$A_i$$$ to be as small as possible, so we should sort "far" planks by $$$A_i$$$ and consider only the prefixes of this sequence. On the other hand, we should sort "near" planks by $$$B_i$$$ (their $$$A_i$$$ only contribute to the time mentioned above) and $$$x$$$ is the (number of "near" planks + "far" planks at the beginning) minus $$$(k + d)$$$, where the time we want to reach is the sum of $$$B_i$$$ of the last $$$k$$$ "near" planks plus $$$d \cdot B$$$ for the last $$$k-1$$$-th "near" plank.

It gets WA. I don't see any mistake in the reasoning.

Anyway, very nice problems. I didn't get to reading E/F at all, but B-D were excellent.

»
5 years ago, # |
Rev. 2

Can someone please prove why only this kind of solution needs to be considered in B?

"we can solve it by actually sorting the indices in this way and try all candidates of the solution in this form one by one."

»
5 years ago, # |
Rev. 2

For B I got AC, but I am still unable to prove the correctness of my solution (I don't even know whether the approach is correct).

Sort the pairs by l, then by r.

Initially maintain p = a[0] and q = a[1], and ans = the total number of happy participants using only these 2 problems.

Then for each pair a in the array, see which of the following is better:

p = merge(p, q), q = a

p = merge(p, a), q unchanged

q = merge(q, a), p unchanged

Calculate the number of happy participants for each of the above cases and then make the best possible assignment.

Can anyone tell me whether this approach may fail on some test case? If it doesn't, can someone help me derive a proof of correctness?

I submitted and got AC purely based on intuition.

Thanks.

»
5 years ago, # |

Thank you for your participation!

You may have noticed that the "Sample" data set did not contain any tests during the contest (though sample tests are included in the "All" set). It was a misconfiguration and has now been fixed.

Sorry for the inconvenience, but I believe the effect is negligibly small.

»
5 years ago, # |

Hello, I'm passing all test cases for problem B except for 2, and I can't figure out what's wrong with my code. Can anyone help? Here's my submission: https://atcoder.jp/contests/agc040/submissions/8280444

»
5 years ago, # |

Thanks for the amazing problems indeed!

My screencast with commentary in Russian (on Errichto's advice): https://youtu.be/_k7ADEJGrTM

»
5 years ago, # |

I enjoyed B very much; the solution for C explained in the editorial is also very beautiful.

»
5 years ago, # |

Can someone find the errors in my logic for D? I assume that Snuke can win if he catches up to or passes Ringo at the end of some plank, so I need to find the set of all planks to the left of that plank, plus that plank itself. It is always optimal to arrange the set of planks in increasing order of $$$B_i$$$ to get the position $$$x$$$ such that Snuke wins if Ringo starts at some position $$$\leq x$$$. I observe that it is always better to add planks with $$$B_i \geq A_i$$$ to the set, and if some plank with $$$A_i > B_i$$$ lies outside the range $$$[0 \ldots x]$$$, then we can remove it to get a better answer. So my solution is to put all planks with $$$B_i \geq A_i$$$ into the set and iterate over all the other planks $$$C$$$, considering the scenario where plank $$$C$$$ has the maximal $$$B_i$$$ among the planks with $$$B_i < A_i$$$, and adding other planks with non-greater $$$B_i$$$ as long as $$$C$$$ does not end up outside the range $$$[0 \ldots x]$$$. I repeatedly take the plank with minimum $$$A_i$$$ among such planks while one is takeable. Of course, I also consider the case where there are no planks with $$$B_i < A_i$$$. My solution got WA: 8301236

  • »
    »
    5 years ago, # ^ |

    Generate some random cases and compare with an AC solution. I found a lot of mistakes in my own thinking that way.
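That suggestion can be sketched as a small harness; the toy `fast`/`brute` pair below is only a placeholder for your own two solutions, and `gen` for your own case generator:

```python
import random

def stress(fast, brute, gen, trials=500, seed=0):
    """Run both solutions on random small cases; return the first
    mismatching case (with both answers), or None if all agree."""
    rng = random.Random(seed)
    for _ in range(trials):
        case = gen(rng)
        got, want = fast(case), brute(case)
        if got != want:
            return case, got, want
    return None

# Toy placeholder pair: a closed form vs. a direct loop.
def fast(n):
    return n * (n + 1) // 2

def brute(n):
    return sum(range(n + 1))

print(stress(fast, brute, lambda rng: rng.randint(0, 50)))  # prints None
```

When it returns a mismatch, that small failing case is usually easy to trace by hand.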

»
4 years ago, # |

Can someone help me with the editorial for problem B? I can't fully grasp the intuition behind this statement:
During the k-th breadth-first search, we will visit only the seats i such that Hk(i) < Hk−1(i), hence the total number of seats visited in the N^2 steps (for 1 ≤ k ≤ N^2) is O(N^3)
Is the idea that, during every breadth-first search, due to the above constraint, the number of nodes visited is O(N) instead of O(N^2), making the final complexity O(N^3)? If so, I am not able to fully see that O(N) part.

  • »
    »
    4 years ago, # ^ |

    Are you sure it's AtCoder Grand Contest 40's problem B you're talking about? I looked at the editorial for AGC 40 B; it has an n log n solution in it, no BFS, etc.

    • »
      »
      »
      4 years ago, # ^ |

      Oops, thanks for catching that. It was such a blunder on my side. It was supposed to be AtCoder Grand Contest 44 :)

      • »
        »
        »
        »
        4 years ago, # ^ |
        Rev. 8

        I read the editorial and couldn't prove the claim, and was quite perplexed for some time, but now I think I have some idea of why what they are saying is true.

        This is the grid below:

                                       . . L . . . . . .
                                       . . . . . . . . .
                                       . . . . . . . . .
                                       . . . . . . . . .
                                       . . . . . . . . .

        All the dots are occupied places, and the person 'L' is the one leaving this turn. Everyone has already computed the minimum number of people they must cross to escape.

        When L leaves, the two people to the left of L don't need to care (I mean, their minimum distances won't change), because they can already exit in 0 and 1 moves respectively, and the disappearance of L won't suddenly give them a better path.

        However, the node just to the right of L gains some benefit: it had to cross 3 people earlier, but now it only needs to cross 2 people if it heads towards the left. So we update its min_cost/hate. But now the neighbour of this node is also affected, because this node (the one to the right of L) was just updated.

        https://atcoder.jp/contests/AGC044/submissions/13531558

        The solution given here (from the editorial) is such that it only updates the nodes that need to be updated. Hence its complexity is O(updates).

        Now, we notice the final part:

        Imagine we have a node A and some other node L which has just left; then A's distance will only be updated if L was in the way on some optimal path starting from A, since only then will haters(A) become haters(A) - 1.

        So we may look at the number of updates in another way: every node out of the O(N^2) nodes has a kind of linear, O(N)-length optimal path. An update happens when this optimal path length decreases by 1, and since the path length is at most N, the min_haters of a particular person will change at most N times.

        So for each of the N^2 people we have at most N updates to their value; hence N^3 is the complexity.
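The update rule described above can be sketched as follows. This is my own sketch of the idea in this thread, not the editorial's code; the function name `total_hate` and the straight-line initial distances are assumptions on my part.

```python
from collections import deque

def total_hate(n, leave_order):
    # d[i][j]: minimum number of still-occupied cells a person at (i, j)
    # must cross to leave the n x n grid, not counting themselves.
    # With everyone present, walking straight to the nearest edge is optimal.
    occ = [[1] * n for _ in range(n)]
    d = [[min(i, j, n - 1 - i, n - 1 - j) for j in range(n)] for i in range(n)]
    total = 0
    for r, c in leave_order:
        total += d[r][c]
        occ[r][c] = 0
        # (r, c) is now free to cross; re-relax only cells whose distance
        # strictly decreases, and keep propagating those improvements.
        q = deque([(r, c)])
        while q:
            i, j = q.popleft()
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                a, b = i + di, j + dj
                if 0 <= a < n and 0 <= b < n and d[i][j] + occ[i][j] < d[a][b]:
                    d[a][b] = d[i][j] + occ[i][j]
                    q.append((a, b))
    return total
```

Each cell's value only ever decreases and is bounded by N, so across all departures there are at most O(N^3) successful relaxations, which is exactly the counting argument above.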

        • »
          »
          »
          »
          »
          4 years ago, # ^ |

          Thank you for the response; I really appreciate the long and elaborate explanation. The way you put it at the end is more intuitive. I guess whoever solved it during the contest must have used the same intuition; otherwise, I think it is too difficult to draw and find the pattern to converge to O(N). The same has been explained in this comment.
          Thank you once again :)