- 01 Mar, 2018 1 commit
Mayra Cabrera authored
- 28 Feb, 2018 1 commit
Dylan Griffith authored
- 23 Feb, 2018 1 commit
Nick Thomas authored
- 22 Feb, 2018 1 commit
Yorick Peterse authored
This optimises searching for users when using queries consisting of one or two characters, such as "ab". We optimise such cases by searching for `LOWER(name)` and `LOWER(username)` instead of using `ILIKE`. Using `LOWER` produces a _much_ better performing query. For example, when searching for all users matching the term "a" we'd produce the following plan:

    Limit  (cost=637.69..637.74 rows=20 width=805) (actual time=41.983..41.995 rows=20 loops=1)
      Buffers: shared hit=8330
      ->  Sort  (cost=637.69..638.61 rows=368 width=805) (actual time=41.982..41.990 rows=20 loops=1)
            Sort Key: (CASE WHEN ((name)::text = 'a'::text) THEN 0 WHEN ((username)::text = 'a'::text) THEN 1 WHEN ((email)::text = 'a'::text) THEN 2 ELSE 3 END), name
            Sort Method: top-N heapsort  Memory: 35kB
            Buffers: shared hit=8330
            ->  Bitmap Heap Scan on users  (cost=75.47..627.89 rows=368 width=805) (actual time=9.452..41.305 rows=277 loops=1)
                  Recheck Cond: (((name)::text ~~* 'a'::text) OR ((username)::text ~~* 'a'::text) OR ((email)::text = 'a'::text))
                  Rows Removed by Index Recheck: 7601
                  Heap Blocks: exact=7636
                  Buffers: shared hit=8327
                  ->  BitmapOr  (cost=75.47..75.47 rows=368 width=0) (actual time=8.290..8.290 rows=0 loops=1)
                        Buffers: shared hit=691
                        ->  Bitmap Index Scan on index_users_on_name_trigram  (cost=0.00..38.85 rows=180 width=0) (actual time=4.369..4.369 rows=4071 loops=1)
                              Index Cond: ((name)::text ~~* 'a'::text)
                              Buffers: shared hit=360
                        ->  Bitmap Index Scan on index_users_on_username_trigram  (cost=0.00..34.41 rows=188 width=0) (actual time=3.896..3.896 rows=4140 loops=1)
                              Index Cond: ((username)::text ~~* 'a'::text)
                              Buffers: shared hit=328
                        ->  Bitmap Index Scan on users_email_key  (cost=0.00..1.94 rows=1 width=0) (actual time=0.022..0.022 rows=0 loops=1)
                              Index Cond: ((email)::text = 'a'::text)
                              Buffers: shared hit=3
    Planning time: 3.912 ms
    Execution time: 42.171 ms

With the changes in this commit we now produce the following plan instead:

    Limit  (cost=13257.48..13257.53 rows=20 width=805) (actual time=1.567..1.579 rows=20 loops=1)
      Buffers: shared hit=287
      ->  Sort  (cost=13257.48..13280.93 rows=9379 width=805) (actual time=1.567..1.572 rows=20 loops=1)
            Sort Key: (CASE WHEN ((name)::text = 'a'::text) THEN 0 WHEN ((username)::text = 'a'::text) THEN 1 WHEN ((email)::text = 'a'::text) THEN 2 ELSE 3 END), name
            Sort Method: top-N heapsort  Memory: 35kB
            Buffers: shared hit=287
            ->  Bitmap Heap Scan on users  (cost=135.66..13007.91 rows=9379 width=805) (actual time=0.194..1.107 rows=277 loops=1)
                  Recheck Cond: ((lower((name)::text) = 'a'::text) OR (lower((username)::text) = 'a'::text) OR ((email)::text = 'a'::text))
                  Heap Blocks: exact=277
                  Buffers: shared hit=287
                  ->  BitmapOr  (cost=135.66..135.66 rows=9379 width=0) (actual time=0.152..0.152 rows=0 loops=1)
                        Buffers: shared hit=10
                        ->  Bitmap Index Scan on yorick_test_users  (cost=0.00..124.75 rows=9377 width=0) (actual time=0.101..0.101 rows=277 loops=1)
                              Index Cond: (lower((name)::text) = 'a'::text)
                              Buffers: shared hit=4
                        ->  Bitmap Index Scan on index_on_users_lower_username  (cost=0.00..1.94 rows=1 width=0) (actual time=0.035..0.035 rows=1 loops=1)
                              Index Cond: (lower((username)::text) = 'a'::text)
                              Buffers: shared hit=3
                        ->  Bitmap Index Scan on users_email_key  (cost=0.00..1.94 rows=1 width=0) (actual time=0.014..0.014 rows=0 loops=1)
                              Index Cond: ((email)::text = 'a'::text)
                              Buffers: shared hit=3
    Planning time: 0.303 ms
    Execution time: 1.687 ms

Here we can see the new query is 25 times faster than the old query.
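As a rough sketch, the technique boils down to expression indexes plus exact matches on the lowercased columns. `index_on_users_lower_username` appears in the plan above; the index on `LOWER(name)` and the exact DDL are assumptions, not the actual migration:

    -- Expression indexes so equality on the lowercased value is indexable
    -- (the username index name is taken from the plan; the other is assumed):
    CREATE INDEX index_on_users_lower_name ON users (LOWER(name));
    CREATE INDEX index_on_users_lower_username ON users (LOWER(username));

    -- For one- or two-character terms, replace ILIKE with an exact match,
    -- ordered the same way as the Sort Key in the plans above:
    SELECT *
    FROM users
    WHERE LOWER(name) = LOWER('a')
       OR LOWER(username) = LOWER('a')
       OR email = 'a'
    ORDER BY CASE
               WHEN name = 'a' THEN 0
               WHEN username = 'a' THEN 1
               WHEN email = 'a' THEN 2
               ELSE 3
             END,
             name
    LIMIT 20;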
- 20 Feb, 2018 3 commits
Andreas Brandl authored
Andreas Brandl authored
This helps with queries that get project ids based on the comparably rare visibility levels 10 and 20. For these, Postgres can now leverage the partial index for an index-only scan to improve performance. Example queries:

    SELECT id FROM projects WHERE visibility_level IN (10,20)
    SELECT id FROM projects WHERE visibility_level IN (10)

For MySQL, this results in a full index on id because MySQL omits the WHERE clause; that is, the index is essentially a duplicate of the primary key.
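A sketch of what such a partial index looks like (the index name is an assumption):

    -- Only rows with the rare visibility levels are indexed, so the example
    -- queries above can be answered with an index-only scan:
    CREATE INDEX index_projects_on_id_partial_for_visibility
      ON projects (id)
      WHERE visibility_level IN (10, 20);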
Dylan Griffith authored
- 16 Feb, 2018 2 commits
- 15 Feb, 2018 1 commit
Dylan Griffith authored
- 12 Feb, 2018 1 commit
Michael Kozono authored
To finish migrating untracked files to uploads for installations that were affected by https://gitlab.com/gitlab-org/gitlab-ce/issues/42881, or just to delete the temp table if it is empty and was left behind.
- 08 Feb, 2018 2 commits
Greg Stark authored
Artifacts are in the middle of being migrated from ci_builds to ci_job_artifacts. The expiration date is currently visible in both of these tables, and the test for whether an expired artifact is present for a job is complex as it requires checking both of the tables.

Add two new indexes, one on ci_builds.artifacts_expire_at and one on ci_job_artifacts.expire_at, to enable finding expired artifacts efficiently. Until the migration is finished, replace the SQL for finding expired and non-expired artifacts with a hand-crafted UNION ALL based query instead of using OR; this overcomes a database optimizer limitation that prevents it from using these indexes. When the migration is finished, the next version should remove this query and replace it with a much simpler query on just ci_job_artifacts.

See https://gitlab.com/gitlab-org/gitlab-ce/issues/42561 for followup.
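A sketch of the UNION ALL shape this describes (the table and expiry column names come from the message; selecting a build id from each side is an assumption):

    -- One branch per table lets the planner use one index per branch,
    -- instead of a single OR condition it cannot split:
    SELECT id AS build_id
    FROM ci_builds
    WHERE artifacts_expire_at < NOW()
    UNION ALL
    SELECT job_id AS build_id
    FROM ci_job_artifacts
    WHERE expire_at < NOW();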
Bob Van Landuyt authored
Populating the fork networks was scheduled multiple times because of bugs that needed to be fixed:

- https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/15595/ (creating fork networks for projects that were missing the root of the fork network)
- https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/15709 (the API allowed creating forked_project_links without fork_network_members)

Scheduling this migration multiple times would cause it to run concurrently, which in turn would try to insert the same data into `fork_network_members`, causing duplicate key errors. This avoids running the migration multiple times.
- 07 Feb, 2018 1 commit
Rubén Dávila authored
- 06 Feb, 2018 1 commit
Douwe Maan authored
- 05 Feb, 2018 2 commits
Andreas Brandl authored
Fixes #32282.
Yorick Peterse authored
EE seems to have had an outdated schema at some point, leading to some environments not having the right columns in place. This adjusts the migration for `issues.closed_at` so it takes care of those cases, ensuring data can be migrated properly. Fixes https://gitlab.com/gitlab-org/gitlab-ee/issues/4803
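A hedged illustration of the kind of defensive DDL such a fix implies (Postgres 9.6+ syntax; the real migration presumably checks for the column in Ruby instead):

    -- Only add the column when earlier schema drift left it missing:
    ALTER TABLE issues
      ADD COLUMN IF NOT EXISTS closed_at timestamp without time zone;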
- 02 Feb, 2018 6 commits
Matija Čupić authored
Matija Čupić authored
Matija Čupić authored
Micaël Bergeron authored
Andreas Brandl authored
Matija Čupić authored
- 01 Feb, 2018 4 commits
Micaël Bergeron authored
Micaël Bergeron authored
Micaël Bergeron authored
Yorick Peterse authored
In the event of Sidekiq jobs getting lost there may be some rows left to migrate. This migration ensures any remaining jobs are completed and that all data has been migrated. This fixes https://gitlab.com/gitlab-org/gitlab-ce/issues/41595
- 26 Jan, 2018 1 commit
Matija Čupić authored
- 23 Jan, 2018 1 commit
Jan Provaznik authored
The search query is especially slow if a user searches for a generic string which matches many records; in such a case the search can take tens of seconds or time out. To speed up the search query, we search only the first 1000 records. If there are more than 1000 matching records, we just display "1000+" instead of a precise total count, on the assumption that with such an amount the exact count is not so important to the user.

Because even the limited search was not fast enough for issues, a 2-phase approach is used for issues: first we use a simpler/faster query to get all public issues; if this exceeds the limit, we just return the limit. If the amount of matching results is lower than the limit, we re-run the more complex search query (which also includes confidential issues). Re-running the complex query should be fast enough in that case because the amount of matching issues is lower than the limit.

Because the exact total_count is now limited, this patch also switches to "prev/next" pagination.

Related #40540
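The limited count can be sketched as a capped subquery (table, column, and search condition here are assumptions; counting up to 1001 lets the caller distinguish "exactly 1000" from "more than 1000"):

    SELECT COUNT(*)
    FROM (
      SELECT 1
      FROM issues
      WHERE title ILIKE '%term%'
      LIMIT 1001
    ) AS limited_matches;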
- 22 Jan, 2018 1 commit
Matija Čupić authored
- 18 Jan, 2018 2 commits
Greg Stark authored
Rémy Coutable authored
Get rid of a Rails 5 deprecation warning in db/migrate/20170425112128_create_pipeline_schedules_table.rb Signed-off-by: Rémy Coutable <remy@rymai.me>
- 17 Jan, 2018 2 commits
Francisco Javier López authored
Douwe Maan authored
[10.3] Migrate `can_push` column from `keys` to `deploy_keys_project`

See merge request gitlab/gitlabhq!2276

(cherry picked from commit f6ca52d31bac350a23938e0aebf717c767b4710c)

1f2bd3c0 Backport to 10.3
- 15 Jan, 2018 1 commit
Drew Blessing authored
Previously, the last push widget would only show when the branch had never had any merge request associated with it, even a merged or closed one. Now the widget disregards merge requests that are merged or closed.
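Sketched as a query, the widget's new filter roughly amounts to the following (table, column, and state values are assumptions):

    -- Ignore merged and closed merge requests when deciding whether the
    -- branch already has a merge request associated with it:
    SELECT *
    FROM merge_requests
    WHERE source_branch = 'my-branch'
      AND state NOT IN ('merged', 'closed');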
- 12 Jan, 2018 1 commit
Hiroyuki Sato authored
- 11 Jan, 2018 1 commit
🙈 jacopo beschi 🙉 authored
- 10 Jan, 2018 1 commit
Jan Provaznik authored
For each MR diff an extra 'SELECT COUNT()' query is executed to get the number of commits for the diff. The overall time to get these counts for all MR diffs can be quite expensive. To speed up loading of MR info, the number of commits is stored in an extra column on the MR diff. Closes #38068
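A sketch of the cached-count approach (table and column names are assumptions):

    -- Store the count once, so reads stop issuing SELECT COUNT() per diff:
    ALTER TABLE merge_request_diffs ADD COLUMN commits_count integer;

    UPDATE merge_request_diffs d
    SET commits_count = (
      SELECT COUNT(*)
      FROM merge_request_diff_commits c
      WHERE c.merge_request_diff_id = d.id
    );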
- 08 Jan, 2018 2 commits
Paco Guzman authored
Michael Kozono authored
Originally from branch 'fix-authorized-keys-enabled-default-2738' via merge request https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/2240. Removed background migrations which were intended to fix state after using GitLab without a default having been set.

Squashed commits:

- Locally, if Spring was not restarted, `current_application_settings` was still cached, which prevented the migration from editing the file. This will also ensure that any app server somehow hitting old cache data will properly default this setting regardless.
- Retroactively fix migration: this allows us to identify customers who ran the broken migration. Their `authorized_keys_enabled` column does not have a default at this point. We will fix the column after we fix the `authorized_keys` file.
- Fix authorized_keys file if needed.
- Add default to authorized_keys_enabled setting. Reminder: the original migration was fixed retroactively a few commits ago, so people who did not ever run GitLab 9.3.0 already have a column that defaults to true and disallows nulls. I have tested on PostgreSQL and MySQL that it is safe to run this migration regardless. Affected customers who did run 9.3.0 are the ones who need this migration to fix the authorized_keys_enabled column. The reason for the retroactive fix plus this migration is that it allows us to run a migration in between to fix the authorized_keys file only for those who ran 9.3.0.
- Tweaks to address feedback.
- Extract work into background migration.
- Move batch-add logic to background migration: do the work synchronously to avoid multiple workers attempting to add batches of keys at the same time. Also, make the delete portion wait until after adding is done.
- Do read and delete work in background migration.
- Fix Rubocop offenses.
- Add changelog entry.
- Inform the user of actions taken or not taken.
- Prevent unnecessary `select`s and `remove_key`s.
- Add logs for action taken.
- Fix optimization.
- Reuse `Gitlab::ShellAdapter`.
- Guarantee the earliest key.
- Fix migration spec for MySQL.