By using Rouge::Lexer.find instead of find_fancy() and memoizing the HTML formatter we can speed up the highlighting process by between 1.7 and 1.8 times (at least when measured using synthetic benchmarks). To measure this I used the following benchmark:

    require 'benchmark/ips'

    input = ''

    Dir['./app/controllers/**/*.rb'].each do |controller|
      input << <<-EOF
    <pre><code class="ruby">#{File.read(controller).strip}</code></pre>
      EOF
    end

    document = Nokogiri::HTML.fragment(input)
    filter = Banzai::Filter::SyntaxHighlightFilter.new(document)

    puts "Input size: #{(input.bytesize.to_f / 1024).round(2)} KB"

    Benchmark.ips do |bench|
      bench.report 'call' do
        filter.call
      end
    end

This benchmark produces 250 KB of input. Before these changes the timing output would be as follows:

    Calculating -------------------------------------
                    call     1.000  i/100ms
    -------------------------------------------------
                    call     22.439  (±35.7%) i/s -     93.000

After these changes the output instead is as follows:

    Calculating -------------------------------------
                    call     1.000  i/100ms
    -------------------------------------------------
                    call     41.283  (±38.8%) i/s -    148.000

Note that due to the fairly high standard deviation and this being a synthetic benchmark it's entirely possible the real-world improvements are smaller.
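For illustration, a minimal sketch of the two ideas described above (looking lexers up with Rouge::Lexer.find and memoizing a single HTML formatter) might look like the code below. The class and method names are assumptions for the sketch, not the actual GitLab filter code:

    # Sketch only: resolve the lexer via Rouge::Lexer.find and reuse one
    # memoized HTML formatter instead of building a new one per code block.
    require 'rouge'

    class HighlightSketch
      def highlight(code, language)
        # Lexer.find returns a lexer class or nil; fall back to plain text.
        lexer = Rouge::Lexer.find(language) || Rouge::Lexers::PlainText
        formatter.format(lexer.lex(code))
      end

      private

      # Built once and cached, rather than recreated on every call.
      def formatter
        @formatter ||= Rouge::Formatters::HTML.new
      end
    end

The memoization matters because the filter may be called once per highlighted code block in a document, so constructing the formatter repeatedly adds avoidable overhead.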