1. 02 May, 2014 1 commit
  2. 28 Apr, 2014 1 commit
  3. 26 Apr, 2014 3 commits
  4. 17 Apr, 2014 1 commit
  5. 12 Apr, 2014 1 commit
  6. 08 Apr, 2014 1 commit
  7. 07 Apr, 2014 1 commit
  8. 25 Mar, 2014 1 commit
  9. 25 Feb, 2014 1 commit
    • st/mesa: add texture gather support. (v2) · 7c3138ac
      Dave Airlie authored
      This adds support for GL_ARB_texture_gather, and one step towards
      support for GL_ARB_gpu_shader5.
      It also adds support for passing the TG4 instruction through, along
      with non-constant texture offsets, and for tracking them in the
      optimisation passes.
      This doesn't support hardware with native textureGatherOffsets; to
      support that you'd need to add a CAP that, when set, disables the
      lowering pass, bump the MAX offsets to 4, and then do the i0,j0
      sampling using
      Signed-off-by: Dave Airlie <airlied@redhat.com>
  10. 23 Feb, 2014 1 commit
  11. 20 Feb, 2014 1 commit
  12. 12 Feb, 2014 1 commit
  13. 05 Feb, 2014 1 commit
  14. 22 Jan, 2014 1 commit
    • mesa: Replace _mesa_program_index_to_target with _mesa_shader_stage_to_program. · 46d210d3
      Paul Berry authored
      In my recent zeal to refactor Mesa's handling of the gl_shader_stage
      enum, I accidentally wound up with two functions that do the same
      thing: _mesa_program_index_to_target() and
      _mesa_shader_stage_to_program().
      This patch keeps _mesa_shader_stage_to_program(), since its name is
      more consistent with other related functions.  However, it changes the
      signature so that it accepts an unsigned integer instead of a
      gl_shader_stage; this avoids awkward casts when the function is called
      from C++ code.
      Reviewed-by: Chris Forbes <chrisf@ijw.co.nz>
      Reviewed-by: Brian Paul <brianp@vmware.com>
  15. 13 Jan, 2014 3 commits
  16. 09 Jan, 2014 2 commits
  17. 08 Jan, 2014 4 commits
  18. 30 Dec, 2013 1 commit
    • Rename overloads of _mesa_glsl_shader_target_name(). · 26707abe
      Paul Berry authored
      Previously, _mesa_glsl_shader_target_name() had an overload for GLenum
      and an overload for the gl_shader_type enum, each of which behaved
      differently.  However, since GLenum is a synonym for unsigned int, and
      unsigned ints are often used in place of gl_shader_type (e.g. in loop
      indices), there was a big risk of calling the wrong overload by
      mistake.  This patch gives the two overloads different names so that
      it's always clear which one we mean to call.
      Reviewed-by: Brian Paul <brianp@vmware.com>
  19. 12 Dec, 2013 2 commits
  20. 09 Dec, 2013 3 commits
    • glsl/loops: Get rid of lower_bounded_loops and ir_loop::normative_bound. · 088494aa
      Paul Berry authored
      Now that loop_controls no longer creates normatively bound loops,
      there is no need for ir_loop::normative_bound or the
      lower_bounded_loops pass.
      Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
    • glsl/loops: replace loop controls with a normative bound. · e00b93a1
      Paul Berry authored
      This patch replaces the ir_loop fields "from", "to", "increment",
      "counter", and "cmp" with a single integer ("normative_bound") that
      serves the same purpose.
      I've used the name "normative_bound" to emphasize the fact that the
      back-end is required to emit code to prevent the loop from running
      more than normative_bound times.  (By contrast, an "informative" bound
      would be informational only: the back-end could use it, but would not
      be required to enforce it.)
      Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
      Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
    • glsl/loops: consolidate bounded loop handling into a lowering pass. · 2c17f97f
      Paul Berry authored
      Previously, all of the back-ends (ir_to_mesa, st_glsl_to_tgsi, and the
      i965 fs and vec4 visitors) had nearly identical logic for handling
      bounded loops.  This replaces the duplicate logic with an equivalent
      lowering pass that is used by all the back-ends.
      Note: on i965, there is a slight increase in instruction count.  For
      example, a loop like this:
          for (int i = 0; i < 100; i++) {
            total += i;
          }
      would previously compile down to this (vec4) native code:
                mov(8)       g4<1>.xD 0D
                mov(8)       g8<1>.xD 0D
                cmp.ge.f0(8) null     g8<4;4,1>.xD 100D
          (+f0) break(8)
                add(8)       g5<1>.xD g5<4;4,1>.xD g4<4;4,1>.xD
                add(8)       g8<1>.xD g8<4;4,1>.xD 1D
                add(8)       g4<1>.xD g4<4;4,1>.xD 1D
                while(8) loop
      After this patch, the "(+f0) break(8)" turns into:
          (+f0) if(8)
      because the back-end isn't smart enough to recognize that "if
      (condition) break;" can be done using a conditional break instruction.
      However, it should be relatively easy for a future peephole
      optimization to properly optimize this.
      Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
      Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
  21. 22 Nov, 2013 1 commit
  22. 21 Nov, 2013 1 commit
  23. 15 Nov, 2013 2 commits
  24. 29 Oct, 2013 1 commit
  25. 17 Oct, 2013 1 commit
  26. 07 Oct, 2013 3 commits