
  1. 16 Oct, 2016 1 commit
  2. 14 Oct, 2016 1 commit
  3. 13 Oct, 2016 1 commit
  4. 11 Oct, 2016 1 commit
  5. 10 Oct, 2016 1 commit
  6. 07 Oct, 2016 3 commits
    • Enable CacheMarkdownField for the remaining models · 99205515
      Nick Thomas authored
      This commit alters views for the following models to use the markdown cache if
      present:
      
      * AbuseReport
      * Appearance
      * ApplicationSetting
      * BroadcastMessage
      * Group
      * Issue
      * Label
      * MergeRequest
      * Milestone
      * Project
      
      At the same time, calls to `escape_once` have been moved into the `single_line`
      Banzai pipeline, so they can't be accidentally omitted, and the work is done
      at save time rather than at render time (a sketch of this caching pattern
      follows below).
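      The concern itself isn't shown in the commit message, but the save-time
      caching it describes might look roughly like the following sketch. The
      module and method names are simplified stand-ins, and `Banzai.render` is
      assumed to be the rendering entry point:

          # Simplified sketch of a markdown-caching concern; not the actual
          # GitLab implementation.
          module CacheMarkdownField
            def self.included(base)
              base.extend(ClassMethods)
            end

            module ClassMethods
              # cache_markdown_field :description installs a before_save hook
              # that re-renders description into description_html whenever the
              # source column changes.
              def cache_markdown_field(field)
                before_save do
                  if public_send("#{field}_changed?")
                    public_send("#{field}_html=", Banzai.render(public_send(field)))
                  end
                end
              end
            end
          end

      Views can then read `object.description_html` when it is present instead
      of re-rendering `object.description` on every request.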
    • Use CacheMarkdownField for notes · 109816c4
      Nick Thomas authored
    • Add markdown cache columns to the database, but don't use them yet · e94cd6fd
      Nick Thomas authored
      This commit adds a number of _html columns and, with the exception of Note,
      starts updating them whenever the content of their partner fields changes.
      
      Note has a collision with the note_html attr_accessor; that will be fixed
      later.

      A background worker for clearing these cache columns is also introduced; use
      `rake cache:clear` to set it off. You can clear the database or Redis caches
      separately by running `rake cache:clear:db` or `rake cache:clear:redis`,
      respectively. A sketch of such a task layout follows below.
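      The commit message doesn't show the tasks themselves; a minimal sketch of
      how such a namespace could be wired up follows (the worker and task names
      are illustrative assumptions, not the actual GitLab definitions):

          # lib/tasks/cache.rake -- illustrative layout only.
          namespace :cache do
            namespace :clear do
              desc 'Clear cached markdown from the database (background worker)'
              task db: :environment do
                # Hypothetical worker that nulls out the *_html columns in batches.
                ClearDatabaseCacheWorker.perform_async
              end

              desc 'Clear the Redis cache'
              task redis: :environment do
                Rails.cache.clear
              end
            end

            desc 'Clear both caches'
            task clear: ['cache:clear:db', 'cache:clear:redis']
          end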
  7. 04 Oct, 2016 2 commits
  8. 03 Oct, 2016 2 commits
  9. 30 Sep, 2016 2 commits
  10. 28 Sep, 2016 2 commits
  11. 23 Sep, 2016 2 commits
  12. 14 Sep, 2016 1 commit
  13. 31 Aug, 2016 1 commit
  14. 30 Aug, 2016 1 commit
  15. 14 Aug, 2016 1 commit
    • Fix a memory leak caused by Banzai::Filter::SanitizationFilter · 504a3b5e
      Ahmad Sherif authored
      In Banzai::Filter::SanitizationFilter#customize_whitelist, we append
      three lambdas that have a reference to the SanitizationFilter instance,
      which in turn (potentially) has a reference to the following chain:

      context hash -> Project instance -> Repository instance -> lookup hash
      -> various Rugged instances -> various mmap-ed git pack files.

      None of the above is garbage collected, because the array we append
      the lambdas to is the constant
      HTML::Pipeline::SanitizationFilter::WHITELIST.
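      Reduced to a standalone sketch, the retention pattern looks like this
      (the class and constant below are illustrative, not the actual
      html-pipeline internals):

          # A lambda that closes over `self` is pushed onto a constant, so every
          # filter instance (and everything it references) stays reachable forever.
          WHITELIST = { transformers: [] } # stand-in for the html-pipeline constant

          class LeakyFilter
            def initialize(context)
              @context = context # may hold a Project -> Repository -> Rugged chain
            end

            def customize_whitelist(whitelist)
              # BUG: mutates the shared constant; `self` is captured for good.
              whitelist[:transformers] << ->(env) { @context[:foo] }
              whitelist
            end
          end

      The fix is to customize a per-instance copy of the whitelist instead of
      mutating the shared constant.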
  16. 06 Aug, 2016 2 commits
  17. 04 Aug, 2016 1 commit
  18. 03 Aug, 2016 2 commits
    • Improve performance of SyntaxHighlightFilter · 038d6feb
      Yorick Peterse authored
      By using Rouge::Lexer.find instead of find_fancy() and memoizing the
      HTML formatter, we can speed up the highlighting process by a factor of
      1.7 to 1.8 (at least when measured using synthetic benchmarks). To
      measure this I used the following benchmark:
      
          require 'benchmark/ips'
      
          input = ''
      
          Dir['./app/controllers/**/*.rb'].each do |controller|
            input << <<-EOF
            <pre><code class="ruby">#{File.read(controller).strip}</code></pre>
      
            EOF
          end
      
          document = Nokogiri::HTML.fragment(input)
          filter = Banzai::Filter::SyntaxHighlightFilter.new(document)
      
          puts "Input size: #{(input.bytesize.to_f / 1024).round(2)} KB"
      
          Benchmark.ips do |bench|
            bench.report 'call' do
              filter.call
            end
          end
      
      This benchmark produces 250 KB of input. Before these changes the timing
      output would be as follows:
      
          Calculating -------------------------------------
                          call     1.000  i/100ms
          -------------------------------------------------
                          call     22.439  (±35.7%) i/s -     93.000
      
      After these changes the output instead is as follows:
      
          Calculating -------------------------------------
                          call     1.000  i/100ms
          -------------------------------------------------
                          call     41.283  (±38.8%) i/s -    148.000
      
      Note that due to the fairly high standard deviation and this being a
      synthetic benchmark it's entirely possible the real-world improvements
      are smaller.
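      The two changes named above might look roughly like this inside the
      filter (a sketch with illustrative helper names, not the actual filter
      code):

          require 'rouge'

          # Resolve lexers via the cheaper Rouge::Lexer.find, falling back to
          # plain text for unknown languages.
          def lexer_for(language)
            Rouge::Lexer.find(language) || Rouge::Lexers::PlainText
          end

          # Build the HTML formatter once instead of once per highlighted node.
          def formatter
            @formatter ||= Rouge::Formatters::HTML.new
          end

          def highlight(code, language)
            formatter.format(lexer_for(language).lex(code))
          end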
    • Improve AutolinkFilter#text_parse performance · dd35c3dd
      Yorick Peterse authored
      By using clever XPath queries we can significantly improve the
      performance of this method; an illustrative query is sketched after the
      benchmark below. The actual improvement depends on the number of links
      in the input, but in my tests the new implementation is usually around
      8 times faster than the old one. This was measured using the following
      benchmark:
      
          require 'benchmark/ips'
      
          text = '<p>' + Note.select("string_agg(note, '') AS note").limit(50).take[:note] + '</p>'
          document = Nokogiri::HTML.fragment(text)
          filter = Banzai::Filter::AutolinkFilter.new(document, autolink: true)
      
          puts "Input size: #{(text.bytesize.to_f / 1024 / 1024).round(2)} MB"
      
          filter.rinku_parse
      
          Benchmark.ips(time: 15) do |bench|
            bench.report 'text_parse' do
              filter.text_parse
            end
      
            bench.report 'text_parse_fast' do
              filter.text_parse_fast
            end
      
            bench.compare!
          end
      
      Here the "text_parse_fast" method is the new implementation and
      "text_parse" the old one. The input size was around 180 MB. Running this
      benchmark outputs the following:
      
          Input size: 181.16 MB
          Calculating -------------------------------------
                    text_parse     1.000  i/100ms
               text_parse_fast     9.000  i/100ms
          -------------------------------------------------
                    text_parse     13.021  (±15.4%) i/s -    188.000
               text_parse_fast    112.741  (± 3.5%) i/s -      1.692k
      
          Comparison:
               text_parse_fast:      112.7 i/s
                    text_parse:       13.0 i/s - 8.66x slower
      
      Again the production timings may (and most likely will) vary depending
      on the input being processed.
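      The speed-up comes from letting XPath pre-filter the text nodes worth
      visiting instead of walking every text node in Ruby. A hedged sketch of
      such a query (the exact predicates in the real filter may differ):

          require 'nokogiri'

          # Only visit text nodes that contain a scheme separator and are not
          # already inside an <a>, <pre>, or <code> element.
          QUERY = %q{descendant-or-self::text()[
            contains(., '://')
            and not(ancestor::a)
            and not(ancestor::pre)
            and not(ancestor::code)
          ]}.freeze

          doc = Nokogiri::HTML.fragment(
            '<p>See https://example.com</p><code>http://skipped.example</code>'
          )

          doc.xpath(QUERY).each do |node|
            puts node.content # autolinking would rewrite this text node
          end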
  19. 02 Aug, 2016 2 commits
  20. 29 Jul, 2016 1 commit
    • Method for returning issues readable by a user · 002ad215
      Yorick Peterse authored
      The method Ability.issues_readable_by_user takes a list of issues and an
      optional user, and returns an Array of the issues readable by that user.
      It is used by Banzai::ReferenceParser::IssueParser#nodes_visible_to_user,
      which as a result no longer needs to compute all the available abilities
      just to check whether a user has the "read_issue" ability.
      
      To test this I benchmarked an issue with 222 comments on my development
      environment. Using these changes the time spent in nodes_visible_to_user
      was reduced from around 120 ms to around 40 ms.
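      The shape of such a batch check might be roughly as follows (a sketch
      assuming each issue exposes a per-user readability predicate; this is
      not necessarily the actual GitLab implementation):

          # Filter a batch of issues down to those readable by the user without
          # computing the full ability set for each one.
          class Ability
            def self.issues_readable_by_user(issues, user = nil)
              issues.select { |issue| issue.readable_by?(user) }
            end
          end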
  21. 26 Jul, 2016 1 commit
  22. 21 Jul, 2016 1 commit
  23. 20 Jul, 2016 2 commits
  24. 19 Jul, 2016 6 commits