
  1. 19 Oct, 2016 1 commit
  2. 18 Oct, 2016 2 commits
  3. 16 Oct, 2016 1 commit
  4. 13 Oct, 2016 1 commit
  5. 11 Oct, 2016 1 commit
  6. 10 Oct, 2016 1 commit
  7. 07 Oct, 2016 1 commit
    • Enable CacheMarkdownField for the remaining models · 99205515
      Nick Thomas authored
      This commit alters views for the following models to use the markdown cache if
      present:
      
      * AbuseReport
      * Appearance
      * ApplicationSetting
      * BroadcastMessage
      * Group
      * Issue
      * Label
      * MergeRequest
      * Milestone
      * Project
      
      At the same time, calls to `escape_once` have been moved into the `single_line`
      Banzai pipeline, so they can't be missed by accident and the work is done at
      save time rather than at render time.
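
      A minimal sketch of the caching idea, assuming the usual Banzai.render entry
      point; the class and attribute names are illustrative and this is not
      GitLab's actual CacheMarkdownField implementation:

          class BroadcastMessage
            attr_reader :message, :message_html

            def initialize(message)
              @message = message
            end

            def save
              # The expensive rendering (including escaping in the single_line
              # pipeline) happens once, at save time.
              @message_html = Banzai.render(message, pipeline: :single_line)
            end

            def rendered_message
              # Views prefer the cached HTML and only fall back to rendering
              # on the fly when the cache is empty.
              message_html || Banzai.render(message, pipeline: :single_line)
            end
          end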
  8. 04 Oct, 2016 2 commits
  9. 03 Oct, 2016 2 commits
  10. 30 Sep, 2016 2 commits
  11. 28 Sep, 2016 1 commit
    • AbstractReferenceFilter caches current project_ref on RequestStore when active · c3f50416
      Paco Guzman authored
      Before we weren’t caching current_project_ref, because normally the reference to
      the current project doesn’t include the path with namespace. But now we store
      the current project in the projects reference cache so it can be used by the same
      filter when it is accessed by path with namespace, or by subsequent filters executed on the cache.
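
      A hedged sketch of per-request caching along these lines; the method and key
      names are illustrative rather than the actual AbstractReferenceFilter
      internals:

          require 'request_store'

          def project_refs_cache
            if RequestStore.active?
              # Shared across every reference filter run in the same request.
              RequestStore[:banzai_project_refs] ||= {}
            else
              @project_refs_cache ||= {}
            end
          end

          def cache_current_project(path_with_namespace, project)
            # Storing the current project under its full path means later
            # lookups by path-with-namespace, or subsequent filters reusing
            # the cache, do not have to resolve it again.
            project_refs_cache[path_with_namespace] ||= project
          end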
  12. 23 Sep, 2016 2 commits
  13. 14 Sep, 2016 1 commit
  14. 31 Aug, 2016 1 commit
  15. 14 Aug, 2016 1 commit
    • Fix a memory leak caused by Banzai::Filter::SanitizationFilter · 504a3b5e
      Ahmad Sherif authored
      In Banzai::Filter::SanitizationFilter#customize_whitelist, we append
      three lambdas that hold a reference to the SanitizationFilter instance,
      which in turn (potentially) holds a reference to the following chain:
      
      context hash -> Project instance -> Repository instance -> lookup hash
      -> various Rugged instances -> various mmap-ed git pack files.
      
      None of the above can be garbage collected, because the array we append
      the lambdas to belongs to the constant
      HTML::Pipeline::SanitizationFilter::WHITELIST.
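
      A simplified sketch of the leak pattern and one way out of it (not
      necessarily the exact upstream fix), assuming the lambdas are appended to
      the whitelist's :transformers array: customise a copy of the whitelist
      instead of mutating the shared constant.

          def customized_whitelist
            # Leaky: appending to the constant's own array keeps each lambda,
            # and the filter instance it closes over, alive for the lifetime
            # of the process.
            #
            #   HTML::Pipeline::SanitizationFilter::WHITELIST[:transformers] <<
            #     lambda { |env| ... }

            # Safer: dup the hash and the array, so the closures become
            # collectable once the filter instance is gone.
            whitelist = HTML::Pipeline::SanitizationFilter::WHITELIST.dup
            whitelist[:transformers] = whitelist[:transformers].dup
            whitelist[:transformers] << lambda { |env| env[:node] } # placeholder
            whitelist
          end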
  16. 06 Aug, 2016 2 commits
  17. 04 Aug, 2016 1 commit
  18. 03 Aug, 2016 2 commits
    • Improve performance of SyntaxHighlightFilter · 038d6feb
      Yorick Peterse authored
      By using Rouge::Lexer.find instead of find_fancy() and memoizing the
      HTML formatter we can speed up the highlighting process by between 1.7
      and 1.8 times (at least when measured using synthetic benchmarks). To
      measure this I used the following benchmark:
      
          require 'benchmark/ips'
      
          input = ''
      
          Dir['./app/controllers/**/*.rb'].each do |controller|
            input << <<-EOF
            <pre><code class="ruby">#{File.read(controller).strip}</code></pre>
      
            EOF
          end
      
          document = Nokogiri::HTML.fragment(input)
          filter = Banzai::Filter::SyntaxHighlightFilter.new(document)
      
          puts "Input size: #{(input.bytesize.to_f / 1024).round(2)} KB"
      
          Benchmark.ips do |bench|
            bench.report 'call' do
              filter.call
            end
          end
      
      This benchmark produces 250 KB of input. Before these changes the timing
      output would be as follows:
      
          Calculating -------------------------------------
                          call     1.000  i/100ms
          -------------------------------------------------
                          call     22.439  (±35.7%) i/s -     93.000
      
      After these changes the output instead is as follows:
      
          Calculating -------------------------------------
                          call     1.000  i/100ms
          -------------------------------------------------
                          call     41.283  (±38.8%) i/s -    148.000
      
      Note that due to the fairly high standard deviation and this being a
      synthetic benchmark it's entirely possible the real-world improvements
      are smaller.
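
      A hedged sketch of the two changes described above, with illustrative method
      names: use the cheaper Rouge::Lexer.find lookup and build the HTML formatter
      once rather than per highlighted node.

          require 'rouge'

          def lexer_for(language)
            Rouge::Lexer.find(language) || Rouge::Lexers::PlainText
          end

          def html_formatter
            # Memoized: constructing a formatter for every <pre> block adds up.
            @html_formatter ||= Rouge::Formatters::HTML.new
          end

          def highlight(code, language)
            html_formatter.format(lexer_for(language).lex(code))
          end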
    • Improve AutolinkFilter#text_parse performance · dd35c3dd
      Yorick Peterse authored
      By using clever XPath queries we can quite significantly improve the
      performance of this method. The actual improvement depends a bit on the
      amount of links used but in my tests the new implementation is usually
      around 8 times faster than the old one. This was measured using the
      following benchmark:
      
          require 'benchmark/ips'
      
          text = '<p>' + Note.select("string_agg(note, '') AS note").limit(50).take[:note] + '</p>'
          document = Nokogiri::HTML.fragment(text)
          filter = Banzai::Filter::AutolinkFilter.new(document, autolink: true)
      
          puts "Input size: #{(text.bytesize.to_f / 1024 / 1024).round(2)} MB"
      
          filter.rinku_parse
      
          Benchmark.ips(time: 15) do |bench|
            bench.report 'text_parse' do
              filter.text_parse
            end
      
            bench.report 'text_parse_fast' do
              filter.text_parse_fast
            end
      
            bench.compare!
          end
      
      Here the "text_parse_fast" method is the new implementation and
      "text_parse" the old one. The input size was around 180 MB. Running this
      benchmark outputs the following:
      
          Input size: 181.16 MB
          Calculating -------------------------------------
                    text_parse     1.000  i/100ms
               text_parse_fast     9.000  i/100ms
          -------------------------------------------------
                    text_parse     13.021  (±15.4%) i/s -    188.000
               text_parse_fast    112.741  (± 3.5%) i/s -      1.692k
      
          Comparison:
               text_parse_fast:      112.7 i/s
                    text_parse:       13.0 i/s - 8.66x slower
      
      Again the production timings may (and most likely will) vary depending
      on the input being processed.
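
      A hedged illustration of the XPath approach; the expression below is a guess
      at the shape of the query, not the exact one used. The point is to let
      libxml2 pre-filter to text nodes that can actually contain a bare link and
      are not already inside an anchor, instead of scanning every text node in
      Ruby:

          require 'nokogiri'

          TEXT_WITH_LINK_QUERY = %q{
            descendant-or-self::text()[
              contains(., "://") and
              not(ancestor::a) and not(ancestor::pre) and not(ancestor::code)
            ]
          }.freeze

          doc = Nokogiri::HTML.fragment('<p>See http://example.com and <a href="http://x">done</a></p>')

          doc.xpath(TEXT_WITH_LINK_QUERY).each do |node|
            # Only the matching text nodes reach Ruby; autolinking rewrites them.
            puts node.content
          end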
  19. 02 Aug, 2016 2 commits
  20. 26 Jul, 2016 1 commit
  21. 20 Jul, 2016 2 commits
  22. 19 Jul, 2016 5 commits
  23. 18 Jul, 2016 1 commit
  24. 16 Jul, 2016 1 commit
  25. 15 Jul, 2016 1 commit
  26. 14 Jul, 2016 2 commits