- 03 Sep, 2017 7 commits
  - Shinya Maeda authored
  - Shinya Maeda authored
  - Shinya Maeda authored
  - Shinya Maeda authored
  - Shinya Maeda authored
  - Shinya Maeda authored
  - Shinya Maeda authored
- 30 Aug, 2017 1 commit
  - Nick Thomas authored
    This is an amalgamation of:
    * Cory Hinshaw: Initial implementation !5552
    * Rémy Coutable: Updates !9350
    * Nick Thomas: Resolve conflicts and add ED25519 support !13712
- 25 Aug, 2017 2 commits
  - Gabriel Mazetto authored
    There are some redundancies in the validation steps; they are kept to preserve the current error message behavior. A few specs also had to be changed in order to fix the madness in the validation logic.
  - Lin Jen-Shin authored
    And specify owners more clearly
- 22 Aug, 2017 2 commits
  - Gabriel Mazetto authored
  - Gabriel Mazetto authored
- 16 Aug, 2017 1 commit
  - Grzegorz Bizon authored
- 10 Aug, 2017 3 commits
  - Yorick Peterse authored
    This commit migrates events data in such a way that push events are stored much more efficiently. This is done by creating a shadow table called "events_for_migration", and a table called "push_event_payloads" which is used for storing push data of push events.

    The background migration in this commit will copy events from the "events" table into the "events_for_migration" table; push events will also have a row created in "push_event_payloads". This approach allows us to reclaim space in the next release by simply swapping the "events" and "events_for_migration" tables, then dropping the old events (now "events_for_migration") table.

    The new table structure is also optimised for storage space, and does not include the unused "title" column nor the "data" column (since this data is moved to "push_event_payloads").

    == Newly Created Events

    Newly created events are inserted into both "events" and "events_for_migration", both using the exact same primary key value. The table "push_event_payloads" in turn has a foreign key to the _shadow_ table. This removes the need for recreating and validating the foreign key after swapping the tables. Since the shadow table also has a foreign key to "projects.id" we also don't have to worry about orphaned rows.

    This approach however does require some additional storage as we're duplicating a portion of the events data for at least 1 release. The exact amount is hard to estimate, but for GitLab.com this is expected to be between 10 and 20 GB at most.

    The background migration in this commit deliberately does _not_ update the "events" table as doing so would put a lot of pressure on PostgreSQL's auto vacuuming system.

    == Supporting Both Old And New Events

    Application code has also been adjusted to support push events using both the old and new data formats. This is done by creating a PushEvent class which extends the regular Event class. Using Rails' Single Table Inheritance system we can ensure the right class is used for the right data, which in this case is based on the value of `events.action`. To support displaying old and new data at the same time the PushEvent class re-defines a few methods of the Event class, falling back to their original implementations for push events in the old format.

    Once all existing events have been migrated the various push event related methods can be removed from the Event model, and the calls to `super` can be removed from the methods in the PushEvent model. The UI and event atom feed have also been slightly changed to better handle this new setup; fortunately only a few changes were necessary to make this work.

    == API Changes

    The API only displays push data of events in the new format. Supporting both formats in the API is a bit more difficult compared to the UI. Since the old push data was not really well documented (apart from one example that used an incorrect "action" name) I decided that supporting both was not worth the effort, especially since events will be migrated in a few days _and_ new events are created in the correct format.
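    A minimal sketch of the fall-back pattern described above. The association and reader method names (`push_event_payload`, `commit_count`, `commits_count`) are assumptions for illustration, not the exact GitLab code:

        class PushEvent < Event
          # New-format events keep their push data in the push_event_payloads
          # table, keyed against the event row.
          has_one :push_event_payload, foreign_key: :event_id

          # New-format rows read from the payload table; old-format rows still
          # carry the legacy data, so fall back to the original Event
          # implementation until the background migration has converted them.
          def commits_count
            return push_event_payload.commit_count if push_event_payload

            super
          end
        end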
  - Lin Jen-Shin authored
    So that we don't have to fetch it for non-forks
  - Rémy Coutable authored
    Also improves the `create_templates` transient attribute and uses `project.project_feature.update_columns` instead of `project.project_feature.update_attributes!` since it's faster.

    Signed-off-by: Rémy Coutable <remy@rymai.me>
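    For context, a rough sketch of a transient factory attribute combined with `update_columns`. The flag name follows the message above; the guarded column update and its value are assumptions, purely illustrative of the pattern rather than the actual factory definition:

        FactoryGirl.define do
          factory :project do
            transient do
              # Opt-in flag: specs request the extra setup work explicitly.
              create_templates false
            end

            after(:create) do |project, evaluator|
              # update_columns writes straight to the database, skipping
              # validations and callbacks, which is why it is faster than
              # update_attributes!. Column and value are assumptions.
              project.project_feature.update_columns(issues_access_level: 20) if evaluator.create_templates
            end
          end
        end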
-
- 09 Aug, 2017 2 commits
  - Lin Jen-Shin authored
  - Lin Jen-Shin authored
    The project didn't have a repository set up. We don't try to stub it if the repository was already there.
- 08 Aug, 2017 1 commit
  - Robert Speicher authored
    Because we assign this value in the model via a callback conditionally on `email_changed?`, this never gets set when using `build_stubbed`, resulting in a "can't be blank" validation error on this field. In this case, we can just assign it manually to the same value as `email`, which is generated via a sequence.
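    A sketch of what that fix can look like in the user factory; `notification_email` is an assumed attribute name used only for illustration:

        FactoryGirl.define do
          factory :user do
            sequence(:email) { |n| "user#{n}@example.org" }

            # build_stubbed never runs the model callback guarded by
            # email_changed?, so mirror the sequenced email value here to keep
            # stubbed records valid. (Attribute name is an assumption.)
            notification_email { email }
          end
        end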
- 07 Aug, 2017 1 commit
  - Jarka Kadlecova authored
- 04 Aug, 2017 1 commit
  - Bob Van Landuyt authored
- 02 Aug, 2017 3 commits
  - Robert Speicher authored
  - Robert Speicher authored
  - Eric authored
- 01 Aug, 2017 2 commits
  - Robert Speicher authored
  - Gabriel Mazetto authored
- 31 Jul, 2017 2 commits
  - Sean McGivern authored
  - Sean McGivern authored
    This was migrated to separate columns in 9.4, and now just needs to be removed for real.
- 28 Jul, 2017 3 commits
  - Yorick Peterse authored
    Having two states that essentially mean the same thing is very much like having a boolean "true" and boolean "mostly-true": it's rather silly. This commit merges the "reopened" state into the "opened" state while taking care of system notes still showing messages along the lines of "Alice reopened this issue".

    A big benefit from having only two states (opened and closed) is that indexing and querying becomes simpler and more performant. For example, to get all the opened issues we no longer have to query both states:

        SELECT * FROM issues WHERE project_id = 2 AND state IN ('opened', 'reopened');

    Instead we can query a single state directly, which can be much faster:

        SELECT * FROM issues WHERE project_id = 2 AND state = 'opened';

    Further, only having two states makes indexing easier as we will only ever filter (and thus scan an index) using a single value. Partial indexes could help, but they aren't supported on MySQL, so using them would complicate the development process while not helping MySQL at all.
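    A minimal sketch of how such a state merge can be carried out in a Rails migration; the class name and the restriction to the issues table are assumptions for illustration:

        class MergeReopenedStateIntoOpened < ActiveRecord::Migration
          def up
            # Collapse the redundant state so queries and indexes only ever
            # deal with a single "open" value.
            execute "UPDATE issues SET state = 'opened' WHERE state = 'reopened'"
          end

          def down
            # There is no way to tell which rows used to be 'reopened'.
            raise ActiveRecord::IrreversibleMigration
          end
        end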
  - Lin Jen-Shin authored
    So the behaviour would be similar in CE and EE
  - Shinya Maeda authored
- 27 Jul, 2017 5 commits
  - Alexis Reigel authored
  - Alexis Reigel authored
  - Alexis Reigel authored
  - Alexis Reigel authored
  - Rémy Coutable authored
    Also fixes some calls to the :project factory that passed the :test_repo trait, since this trait is already included in the :project factory.

    Signed-off-by: Rémy Coutable <remy@rymai.me>
- 20 Jul, 2017 3 commits
  - Alexander Randa authored
  - Grzegorz Bizon authored
  - Grzegorz Bizon authored
- 18 Jul, 2017 1 commit
  - Bob Van Landuyt authored