Discussion:
Odd glMultiDrawArrays behaviour...
John Irwin
2016-07-08 11:31:37 UTC
Does anyone know why glMultiDrawArrays is causing me problems when I use it
to render line-strips from a vertex buffer containing more than 65536
vertices? No crashing but the results are obviously wrong.

The line-strips are stored sequentially in a single vertex buffer which will
usually contain more than this number of vertices, depending on user
interaction. So it's possible for the "first" array to contain vertex
indices of 65536 or more. Could it be that the indices are being
truncated to 16-bit values even though the input values are 32-bit? It looks
as if only the line-strips referenced with indices of 65536 or more are
rendered incorrectly; the rest, up to that point, are fine.
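
Roughly, this is how I build the arrays and issue the call (a simplified
sketch only; num_strips and strip_len are placeholder names, not my actual
code):

#include <stdlib.h>

static void draw_strips(GLsizei num_strips, const GLsizei *strip_len)
{
    GLint   *first  = malloc(num_strips * sizeof *first);
    GLsizei *count  = malloc(num_strips * sizeof *count);
    GLint    offset = 0;
    GLsizei  i;

    /* Strips are packed back-to-back in one vertex buffer, so each strip
       starts where the previous one ended. */
    for (i = 0; i < num_strips; i++) {
        first[i] = offset;        /* starting vertex of strip i in the buffer */
        count[i] = strip_len[i];  /* usually fewer than 100 vertices          */
        offset  += strip_len[i];  /* exceeds 65535 once the buffer grows      */
    }

    glMultiDrawArrays(GL_LINE_STRIP, first, count, num_strips);

    free(first);
    free(count);
}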

I suspect it's a problem with glMultiDrawArrays itself rather than the data
I supply to it. I can render from the same buffer with multiple calls to
glDrawArrays, one for each strip, and the results are ok. But because the
strips are in general quite small (<100 vertices each) and there are
thousands of strips, this would mean calling glDrawArrays thousands of times
in each rendering pass, which is far from efficient.

I cannot find any reference to this problem either online or in the OpenGL
documentation, so it may just be a limitation, if not a bug, in the OpenGL
implementation used by my graphics card. But I'd be interested to hear from
anyone who can confirm this behaviour or who can shed some light on what
might be going on here. Thanks very much.

John.
Nobody
2016-07-09 05:10:21 UTC
On Fri, 08 Jul 2016 12:31:37 +0100, John Irwin wrote:

So:

a) glGetError() isn't reporting any errors, and
b) if you replace the glMultiDrawArrays() call with a call to the
function:

void myMultiDrawArrays(GLenum mode, const GLint* first,
                       const GLsizei* count, GLsizei drawcount)
{
    /* Issue one glDrawArrays call per range, which is exactly what
       glMultiDrawArrays is specified to be equivalent to. */
    GLsizei i;
    for (i = 0; i < drawcount; i++)
        glDrawArrays(mode, first[i], count[i]);
}

then everything works, just inefficiently?

If that's the case, I can't see how this can be anything other than a
driver bug.

glMultiDrawArrays() isn't affected by any GL state beyond that which
also affects glDrawArrays().
John Irwin
2016-07-10 09:43:44 UTC
Post by Nobody
a) glGetError() isn't reporting any errors, and
b) if you replace the glMultiDrawArrays() call with a call to the
function:

void myMultiDrawArrays(GLenum mode, const GLint* first,
                       const GLsizei* count, GLsizei drawcount)
{
    GLsizei i;
    for (i = 0; i < drawcount; i++)
        glDrawArrays(mode, first[i], count[i]);
}
then everything works, just inefficiently?
That's the gist of it...
Post by Nobody
If that's the case, I can't see how this can be anything other than a
driver bug.
Thanks for your feedback. I suspect it's a driver bug too. Unfortunately my
graphics card is no longer supported so there is no prospect of an updated
driver.

However, I've got glMultiDrawElements working with my vertex data, which is
a relief, as that function requires you to specify the data type of the
indices explicitly. The downside is that I need to maintain a large index
buffer containing the consecutive integers 0, 1, 2, ... which, in a sane
world, would be considered redundant. But it seems I have little choice in
the matter.
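
For the record, the workaround looks roughly like this (a sketch only,
assuming the indices live in a GL_ELEMENT_ARRAY_BUFFER; num_strips,
strip_len, total_verts and index_vbo are placeholder names):

#include <stdlib.h>

static void draw_strips_indexed(GLsizei num_strips, const GLsizei *strip_len,
                                GLsizei total_verts, GLuint index_vbo)
{
    GLuint        *seq      = malloc(total_verts * sizeof *seq);
    GLsizei       *count    = malloc(num_strips * sizeof *count);
    const GLvoid **offsets  = malloc(num_strips * sizeof *offsets);
    GLsizeiptr     byte_off = 0;
    GLsizei        i;

    /* The "redundant" index buffer: just 0,1,2,...,total_verts-1 as 32-bit ints. */
    for (i = 0; i < total_verts; i++)
        seq[i] = (GLuint)i;

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_vbo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, total_verts * sizeof(GLuint),
                 seq, GL_STATIC_DRAW);

    /* Per-strip counts and byte offsets into the bound index buffer. */
    for (i = 0; i < num_strips; i++) {
        count[i]   = strip_len[i];
        offsets[i] = (const GLvoid *)byte_off;
        byte_off  += strip_len[i] * sizeof(GLuint);
    }

    /* GL_UNSIGNED_INT makes the 32-bit index type explicit. */
    glMultiDrawElements(GL_LINE_STRIP, count, GL_UNSIGNED_INT, offsets, num_strips);

    free(seq);
    free(count);
    free(offsets);
}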

John.