
  1. 24 Mar, 2016 2 commits
  2. 17 Mar, 2016 1 commit
  3. 04 Feb, 2016 1 commit
  4. 27 Jan, 2016 1 commit
    • Use Atom update times of the first event · de7c9c7a
      Yorick Peterse authored
      By simply loading the first event from the already sorted set we save
      ourselves extra (slow) queries just to get the latest update timestamp.
      This removes the need for Event.latest_update_time and significantly
      reduces the time needed to build an Atom feed.
      
      Fixes gitlab-org/gitlab-ce#12415
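
      As a rough sketch of the idea (the names below are illustrative, not
      the exact code from the commit), the feed can reuse the first row of
      the already sorted relation instead of issuing a separate query:

          # Before: a separate query just to fetch the latest update timestamp
          # (illustrative sketch, not the commit's code).
          # latest_update = Event.latest_update_time

          # After: the events are already sorted newest-first, so the first
          # row carries the latest update time and no extra query is needed.
          events = events.reorder(id: :desc)
          latest_update = events.first.try(:updated_at)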
  5. 26 Jan, 2016 1 commit
  6. 02 Dec, 2015 1 commit
  7. 18 Nov, 2015 2 commits
    • Added Event.limit_recent · 01620dd7
      Yorick Peterse authored
      This will be used to move some querying logic from the users controller
      to the Event model (where it belongs).
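
      A minimal sketch of what such a class method could look like, assuming
      the model's existing "recent" scope (the defaults are illustrative):

          class Event < ActiveRecord::Base
            # Illustrative sketch: return the most recent events, optionally
            # paginated, so the controller no longer builds this query itself.
            def self.limit_recent(limit = 20, offset = nil)
              recent.limit(limit).offset(offset)
            end
          end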
    • Faster way of obtaining latest event update time · 054f2f98
      Yorick Peterse authored
      Instead of using MAX(events.updated_at) we can simply sort the events in
      descending order by the "id" column and grab the first row. In other
      words, instead of this:
      
          SELECT max(events.updated_at) AS max_id
          FROM events
          LEFT OUTER JOIN projects   ON projects.id   = events.project_id
          LEFT OUTER JOIN namespaces ON namespaces.id = projects.namespace_id
          WHERE events.author_id IS NOT NULL
          AND events.project_id IN (13083);
      
      we can use this:
      
          SELECT events.updated_at AS max_id
          FROM events
          LEFT OUTER JOIN projects   ON projects.id   = events.project_id
          LEFT OUTER JOIN namespaces ON namespaces.id = projects.namespace_id
          WHERE events.author_id IS NOT NULL
          AND events.project_id IN (13083)
          ORDER BY events.id DESC
          LIMIT 1;
      
      This has the benefit that on PostgreSQL a backwards index scan can be
      used which, thanks to the "LIMIT 1", processes at most a single row.
      This in turn greatly speeds up the process of grabbing the latest update
      time. This can be confirmed by looking at the query plans. The first
      query produces the following plan:
      
          Aggregate  (cost=43779.84..43779.85 rows=1 width=12) (actual time=2142.462..2142.462 rows=1 loops=1)
            ->  Index Scan using index_events_on_project_id on events  (cost=0.43..43704.69 rows=30060 width=12) (actual time=0.033..2138.086 rows=32769 loops=1)
                  Index Cond: (project_id = 13083)
                  Filter: (author_id IS NOT NULL)
          Planning time: 1.248 ms
          Execution time: 2142.548 ms
      
      The second query in turn produces the following plan:
      
          Limit  (cost=0.43..41.65 rows=1 width=16) (actual time=1.394..1.394 rows=1 loops=1)
            ->  Index Scan Backward using events_pkey on events  (cost=0.43..1238907.96 rows=30060 width=16) (actual time=1.394..1.394 rows=1 loops=1)
                  Filter: ((author_id IS NOT NULL) AND (project_id = 13083))
                  Rows Removed by Filter: 2104
          Planning time: 0.166 ms
          Execution time: 1.408 ms
      
      According to the above plans the 2nd query is around 1500 times faster.
      However, re-running the first query produces timings of around 80 ms,
      making the 2nd query "only" around 55 times faster.
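
      In ActiveRecord terms the two queries roughly correspond to the
      following (assuming "events" is a relation already filtered to the
      relevant projects; the exact call sites in the commit may differ):

          # Aggregate approach: MAX(updated_at) visits every matching row.
          events.maximum(:updated_at)

          # Sorted approach: a backwards index scan over the primary key can
          # stop after the first matching row because of the LIMIT 1.
          events.reorder(id: :desc).limit(1).pluck(:updated_at).first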
  8. 11 Nov, 2015 1 commit
    • Change "recent" scopes to sort by "id" · 7eb502c0
      Yorick Peterse authored
      These scopes can just sort by the "id" column in descending order to
      achieve the same result. An added benefit is being able to perform a
      backwards index scan (depending on the rest of the final query) instead
      of having to actually sort data.
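
      Illustratively, the change amounts to swapping the sort column inside
      the scope (the scope body is assumed, not copied from the commit):

          # Before: ordering on a timestamp column may force an actual sort.
          scope :recent, -> { reorder(created_at: :desc) }

          # After: ordering on the primary key achieves the same result and
          # allows a backwards index scan where the query shape permits it.
          scope :recent, -> { reorder(id: :desc) }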
  9. 15 Sep, 2015 1 commit
  10. 02 Jul, 2015 1 commit
    • 'created_at DESC' is performed twice · 87ac5900
      catatsuy authored
      If the relation is already sorted by created_at in descending order,
      calling .recent applies the same sort a second time. The scope passes
      the string 'created_at DESC', which Rails hands straight to SQL, and
      the duplicated ORDER BY makes for a slow query on MySQL.
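
      A sketch of the problem and the usual fix (the scope definition is an
      assumption, not copied from the commit): order() stacks clauses, while
      reorder() replaces any previously set ordering.

          # Before: a raw string ordering that gets appended, so a relation
          # already sorted by created_at ends up with
          # ORDER BY created_at DESC, created_at DESC.
          scope :recent, -> { order('created_at DESC') }

          # After: reorder discards the earlier ordering instead of stacking
          # a duplicate clause.
          scope :recent, -> { reorder(created_at: :desc) }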
  11. 06 Apr, 2015 1 commit
  12. 22 Mar, 2015 1 commit
  13. 18 Mar, 2015 1 commit
  14. 10 Mar, 2015 1 commit
  15. 18 Feb, 2015 1 commit
  16. 13 Feb, 2015 3 commits
  17. 05 Feb, 2015 1 commit
  18. 29 Dec, 2014 1 commit
  19. 03 Nov, 2014 1 commit
  20. 10 Oct, 2014 1 commit
  21. 25 Jul, 2014 1 commit
  22. 26 Jun, 2014 2 commits
  23. 17 Jun, 2014 2 commits
  24. 13 Jun, 2014 1 commit
  25. 29 May, 2014 1 commit
  26. 09 Apr, 2014 1 commit
  27. 25 Mar, 2014 1 commit
  28. 24 Feb, 2014 1 commit
  29. 10 Dec, 2013 1 commit
  30. 25 Nov, 2013 1 commit
  31. 13 Nov, 2013 1 commit
  32. 20 Aug, 2013 1 commit
  33. 19 Aug, 2013 2 commits