Hash, Inc. Forums

Rendering: Optimizations (Discussion)


Rodney


  • Admin

Rendering presents a wide category of interests but I'd like to discuss a few relative to what we have in A:M and what we can do as users to get the most out of that.

 

While the discussion can certainly go far afield to include external rendering solutions the focus here is on what we can do with the internal renderer. Netrender is also an important factor to consider although for the purpose of this discussion I'd classify that as an external renderer also.

 

At the heart of my present thinking is the idea of duplicate frames in a sequence, and that may be where this topic starts and ends, because for all intents and purposes, in a purely 3D lit and rendered scene, there may technically be no such thing as two frames that are exactly the same. The norm is that some pixels will change in every image within any given sequence.

 

Part of my thinking also rests on the suspicion that I think I roughly know how A:M renders when in fact I really do not.

For instance, I am reasonably sure that pixels are sampled, targeted or measured from the pov of the camera and that data is then pushed into a file; an image.

What is not clear to me is if A:M does a similar test that traverses down the sequence of frames in order to determine what (if any) pixels have changed over the course of a stack of (potential) images.

I must assume that A:M does this, or can do this, and it most likely does so when Multipass is turned on.

Once the data is read in there is much that can be done with it at little cost to render time because in a way almost all potentials are there already in memory... they just haven't been written (rendered) to disk.

 

Yet another part of this thinking is, based on what is actually happening, how we as users can optimize our own efforts so we aren't working against the optimizations of the renderer.

 

A case to examine might be a 30 frame render of the default Chor where nothing changes.

Frame 1 is exactly the same as frame 2, and frame 3 and so on.

If A:M's renderer knew that they were all the same it might simply render the first frame and then duplicate all of the other 29 frames, theoretically saving a lot of render time.

But there is not a lot of data that would inform A:M's renderer this was the case other than the fact that there are no channels with animated movement to be found in the entire project.

That would be useful information to pass on if it isn't already.

 

Tests could certainly be run in order to educate ourselves, and there are other options we can pursue.

For example, as users we might know that nothing has changed in a sequence, so, using the case above, we might render only frame one and duplicate it ourselves via an external program (or in A:M via 'Save Animation As', which combines images together quite speedily).

 

We might also decide that, due to the style of our animation, we want a more choppy or direct movement from frame to frame, and so use the Step option to render only every 5th frame in our sequence. We might then use an external program to pad duplicate frames into those gaps to save rendering time... or create a batch file or utility that simply copies frames and renames them to fill in the gaps for us.
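As a concrete illustration of that "copy and rename" utility, here is a minimal sketch in Python. The folder name, file naming pattern, and step size are assumptions for the example (and easily changed), not anything A:M itself produces.

import shutil
from pathlib import Path

RENDER_DIR = Path("renders")    # hypothetical output folder
STEP = 5                        # frames were rendered every 5th frame (adjust to taste)
LAST_FRAME = 30                 # total length of the sequence

def pad_gaps():
    for frame in range(1, LAST_FRAME + 1):
        key = ((frame - 1) // STEP) * STEP + 1          # nearest rendered key at or before this frame
        src = RENDER_DIR / f"frame_{key:04d}.png"
        dst = RENDER_DIR / f"frame_{frame:04d}.png"
        if src != dst and not dst.exists():
            shutil.copyfile(src, dst)                   # duplicate the held frame into the gap

if __name__ == "__main__":
    pad_gaps()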

 

A specific case we might investigate would be a sequence where all keyframes of an animated sequence are keyed on fours (every fourth frame).

If the interpolation between those keys is stepped (i.e. Hold) then every four frames is the same.

So re-rendering the 2nd, 3rd and 4th frames might be deemed wasted rendering cycles.

But I don't think we have necessarily told A:M what to look for with regard to linear rendering of this sort.

Most renderers are more akin to say.... Pixar's approach where every frame is basically a new start from scratch with the data being processed all over again with little or no regard to adjacent frames; they very likely have never met or even seen their neighbors.

 

My thought in this regard is that if we know we have a stack of images that are all 24x24 pixels, then we should be able to run a few probes through from frame 1 to frame 30 to sort (or extract) same or similar frames and optimize rendering. If the probes determine every pixel is different, then it might be assumed that another, fully nonlinear approach to optimization can and likely should be used. If, however, the probes find similarities, then the frames are sorted according to how similar they are and... I would imagine... the two frames that are most different... on opposing sides of the spectrum... are rendered first. The easiest to render might be used to create the initial preview image, while the most difficult would get the lion's share of the computer's resources reserved for it, because all other frames might then use it as a reference point.
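To make the "probe" idea a little more concrete, here is a rough sketch (Python with the Pillow imaging library) that samples a handful of pixel locations in an already-rendered stack and scores each frame's similarity to frame 1 on a 0.0 to 1.0 scale. The file names and probe points are invented for the example, and, as pointed out later in this thread, matching at a few probe points only suggests similarity; it cannot prove two frames are identical.

from pathlib import Path
from PIL import Image   # Pillow

PROBE_POINTS = [(0, 0), (12, 12), (23, 23)]   # a few sample spots in a 24x24 image

def similarity(img_a, img_b, points=PROBE_POINTS):
    # fraction of probed pixels that match exactly: 1.0 = identical at the probes
    hits = sum(1 for p in points if img_a.getpixel(p) == img_b.getpixel(p))
    return hits / len(points)

def probe_stack(folder="renders", count=30):
    first = Image.open(Path(folder) / "frame_0001.png")
    scores = {}
    for n in range(2, count + 1):
        frame = Image.open(Path(folder) / f"frame_{n:04d}.png")
        scores[n] = similarity(first, frame)
    # most-different frames first, as suggested above
    return sorted(scores.items(), key=lambda item: item[1])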

 

I know my thoughts are entirely too naive and also that A:M already does a wonderful job of crunching data and spitting out images.

I am just trying to better understand how I can best interface with it all.

 

Any thoughts?


  • Admin
that's your own call anyway, not the renderer's

 

Yes, and that is also the point of my query here.

I'm trying to improve upon that call to better understand how to plan and maximize compositing and rendering especially given the tools available in A:M.

 

My thoughts some time back led me to suggest a few render related UI enhancements which Steffen graciously implemented and we can use today.

The first (or one of the first) in that series of optimizations was to have A:M save the project file prior to rendering... because if they are like me, other users fail to do that.

The underlying thought behind that was that, prior to rendering, a great many settings were only in memory and should some error occur anywhere during a potentially very long rendering session... all that data could vanish without a trace.

So, it makes good sense to save before we render.

Step one accomplished. (with minor adjustments later implemented to refine the exact moment where a project is saved).

 

Another implementation allows users to see all options available (or nearly all) at the same time in the render panel.

Steffen plussed this up with some additions of his own and it's a useful improvement.

 

There are other aspects of the saving of images that often elude A:M Users to their own detriment and you mention compositing.

That certainly is an area where users can save time and perhaps even avoid rendering altogether (where images already exist).

But that 'Save Animation As' option isn't featured very prominently, so it is likely many users don't take advantage of its capabilities.

 

Many users resort to the use of external programs because they don't know Animation:Master can perform a specific operation.

It'd be nice if that were not the case.

 

I personally have a tendency to lean toward external compositors but that is largely due to dealing with images that already exist such as scanned in drawings or digitally drawn doodles and animation. As A:M is not specifically optimized for compositing it makes good sense to composite those elsewhere. At the point where anything involved touches something created with A:M the consideration changes and the question is at least asked, "Can this operation be performed exclusively in A:M?" Sometimes it can.

 

Of the two options, compositing and rendering, compositing tends to take considerably less time.

Therefore compositing should rank high in our personal considerations.

A project solely created in A:M with no external references is a very good candidate for rendering (i.e. there's nothing to composite yet).

 

With every rendering I am dealing with two entities that are largely clueless; me as the user and the renderer.

The renderer gets a clue when we give it one (via user settings and project files, or the underlying programming instructions).

I can get a clue by learning more about how to make better and more timely decisions.


  • Admin

Another decision point in this is that of formats themselves... both the interim and final file formats.

 

As an example, consider that frame 1 of a given sequence is 1kb in size on disk.

If all other frames in a 30 frame sequence were exactly the same as this frame each frame would contribute an additional 1kb to the whole sequence.

This is of course where compression comes into play, and algorithms are quite good at recognizing patterns and encoding/reducing the footprint of duplication.

I will assume that in most cases any codec, and the optimizations associated with it, is applied after rendering is complete in order to take full advantage of pattern matching in the streams of bits and bytes.

I'd like to think that these could occur simultaneously (or perhaps in parallel*) so that patterns matched in rendering can also apply to compression of data and vice versa.
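As a small illustration of how duplication can be collapsed after rendering is complete, here is a sketch of a post-render pass that hashes each frame file and reports which frames are byte-for-byte identical. A real codec does something far more sophisticated (and catches near-duplicates, not just exact ones); the file naming here is assumed for the example.

import hashlib
from pathlib import Path

def duplicate_report(folder="renders"):
    seen = {}          # digest -> first frame that produced it
    duplicates = {}    # duplicate frame -> frame it matches
    for path in sorted(Path(folder).glob("frame_*.png")):   # hypothetical naming
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in seen:
            duplicates[path.name] = seen[digest]
        else:
            seen[digest] = path.name
    return seen, duplicates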

 

This might go against the idea that it is always better to render twice as large than to render the same thing a second time, but it might also suggest a dual rendering system where a low-rez preview and a full-rez final render are launched simultaneously. The user would see the preview, which would render very quickly, while the full render ran its full course in the background. The low-rez render could be set to be updated periodically with data from the larger render, as specified by the user in the preview settings (i.e. never, low quality, high quality, etc.). At any time the user could terminate the render and still walk away with the image created at that moment's resolution, a 'good enough to serve my current purpose' decision. The important thing here is to return the UI to the user as soon as possible. All of this is not unlike A:M's Quick Render and Progressive Rendering directly in the interface. I wouldn't be surprised if it were a near exact equivalent.

 

What's the difference you say?

Well, for one thing, all of the various compositing and rendering options cannot be found in one place.

A case in point might be the difference between the basic Render Settings of Tools/Options and the Render Settings of Cameras.

It is important that we have both... don't misunderstand me here.

But it might also be nice to be able to change a setting in one place and know that change will be reflected in the other.

That might be accomplished through an option to "Link all cameras to default Render Settings."

This setting might be considered dangerous if it actually overwrote camera settings, so it would best be implemented by non-destructive means.

It might require the creation of a new default camera that is always coupled with Render Settings.

Other cameras would then be either external or embedded and independent.

It'd be interesting to diagram this out and see what success might look like.

 

There are a lot of moving pieces and parts involved in rendering.

Optimal file formats and camera settings are two to consider prior to the outset of rendering.

I must assume A:M's ultimate 'format' is one we never see rendered to image file or screen.

The data stored to define the near infinite resolution of splines and patches that is.

As for the camera.... I'll have to ponder a little more about that.


  • Hash Fellow

I can answer one of these...

 

 

For instance, I am reasonably sure that pixels are sampled, targeted or measured from the pov of the camera and that data is then pushed into a file; an image.

What is not clear to me is if A:M does a similar test that traverses down the sequence of frames in order to determine what (if any) pixels have changed over the course of a stack of (potential) images.

 

 

No, A:M does not.

The only test that could ascertain if a pixel in frame 2 is exactly the same as in frame 1 is to fully render it and compare.

 

A faster shortcut test that worked would need to determine every aspect of that pixel to be of any use as a test for this purpose. If it did that, it would also have all the information needed to correctly write that pixel to the image file... IOW it would be rendering the pixel.

 

If there were a faster way of rendering the pixel, that would be used as the renderer.

Also, just sampling some pixels in advance won't cover 100% of the cases where pixels could change from frame to frame. Less than 100% would lead to omissions of things that should have been rendered but were presumed to not need to be rendered.


If there were some accurate test that wasn't rendering... it would still have to be not just faster per pixel but WAY faster than regular rendering to justify spending CPU time on it that could have been spent on rendering.

This is a paraphrase of something Martin said when someone asked why the prediction for how long a frame would take to render wasn't more accurate at the start of the render. Basically, the only way to accurately know how long all the pixels in a frame will take to render is to render them.


  • Admin
The only test that could ascertain if a pixel in frame 2 is exactly the same as in frame 1 is to fully render it and compare.

 

That certainly makes sense.

There is an aspect of that, one that concerns itself more with accuracy than with prediction, that I still can't get to sink into my brain.

Of course, I'm not sure how to put it into the language I need, with terms and concepts that are agreed upon.

 

As soon as I see the words, "The only test..." that rings bells in my brain but that's because the equation is framed by the latter part of the statement, namely, ..."ascertain if a pixel in frame 2 is exactly the same as in frame 1...".

 

There are several constraints that may lock that statement down but they don't necessarily apply to the broader scope of the investigation.

For instance in that statement we have necessarily limited the equation to 'pixels' in order to better define that specific test.

But even there I find a little wiggle room in that a test and a prediction aren't the same thing. The test only informs us about the accuracy of the prediction.

So we might postulate that all pixels at a certain location are within a specific tolerance (without rendering) and then render to assign a value to that prediction (presumably a value of 0.0 would be given to no correlation and a value of 1.0 would be assigned to a result that matched exactly). With this in mind, a ray cast down the stack of (potential) frames becomes just like launching rays toward the virtual objects in a scene. Predictive outcomes are postulated, and the prediction is updated as each new ray returns with new information. So in essence we may not need to know that some random pixel at a given x,y,z location in time and space is the same as another. It may be sufficient to know that one pixel in the upper left corner is the same or different. If different then... that's important information that can be used later. If the same, that also is informative.

 

It may help to think of a ray shot into any given linear space as two ends of a spline (ohoh... here we go!) with two control points.

Let's also predict even beforehand that we will eventually be using that spline to draw (or render) a plane that describes the journey along that spline (coupled with an array of other splines) through temporal space. (But more on that later)

 

There are specific details we can assign to this spline based upon where it is located and what is encountered.

Let's say that each of these two control points is constrained to remain in the same relative location, so the spline is exactly linear with no deviation.

Note that before raycasting they occupy the same space on a single point but as one of these control points is shot out it does not return (or terminate) until it hits something or has completed its mission.

 

Because we might know beforehand there are 30 frames of time the ray will penetrate through we can tell it to travel to any of those frames and then terminate.

This can be useful if we want to target a specific frame (and not all frames or other frames).

We might also set up a receiver (at frame 31) that catches the control point and announces the arrival.

So a ray shot from frame 1 to frame 31 might encounter no resistance (i.e. no influential change) and the event is tagged with a value of 1.0 to show that the whole range is linearly the same.

 

Now.. this gets crazy pretty fast so let's scale this back in a little.

We may (theoretically) want to test for every possible value and find whether something has changed. (Think of testing millions of colors vs. just testing black vs. white... the latter is faster... the former mostly a matter of scale.)

 

Think of this again as our spline traveling from frame 1 to frame 30 along a linear path but then it encounters a change at frame 3.

That's like hitting the Y key on a spline to split it in half and then moving that newly created control point to a position at a similar scale.

And interesting things begin to happen when we perceive that those changes indicate movement through time and space.

 

And at this point it may be important to realize that without rendering anything we already have a large body of data available to predict how those frames will change.

When we look at the channels of our Timeline they are all on display.

 

So, perhaps now we can better see how we can predict how pixels from any number of frames will be the same or different from other frames.

And all of this is predictive before we render anything to disk, but also from the perspective of testing through the large number of pixels that have already been rendered... often many times over and over again... to screen.

 

That data from those real time renders is just thrown away... wasted (by the user and presumably to a very large extent also by the program beneath).

And therefore our travels thus far will not be of much assistance in optimizing any 'final' rendering.

 

 

Note: A:M does allow users to save Preview Animation directly onscreen via Right Click > Save Animation As

I'd like to be wrong here but I doubt many users take advantage of this. I know I haven't.

Since starting this topic I have assigned a shortcut key combo of Alt + R to Preview Render and hope to use that more in the future.

It's especially nice in filling an area where I thought A:M was lacking... cropping of imagery.

I now see that I was mistaken and Hash Inc yet again anticipated my needs.


  • Hash Fellow

Consider this... what is the prediction doing that makes it faster than the render?

 

It must be leaving something out.

 

That thing left out is what makes the prediction not 100% accurate. In a render with half a million pixels, even a 1% error of pixels wrongly rendered would be noticeable and not usable.


  • Admin
That thing left out is what makes the prediction not 100% accurate. In a render with half a million pixels, even a 1% error of pixels wrongly rendered would be noticeable and not usable.

 

This is something I brought up at the very beginning.

That with 3D rendering and especially for 3D animation the norm is going to be for things to change.

But this does not mean that things (ultimately pixels) always change; in fact, very often on a frame-by-frame basis they do not change.

This is more often than not a stylistic choice but also falls into the realm of just-in-time optimization.

 

Example: if an animator is animating in a blocking methodology and is using a stepped-mode approach where interpolations are held rather than eased in/out, we might know for a fact that certain frames do not change. And yet a renderer will approach each of those frames, which are exactly the same, as if they were entirely different; the opposite of optimization.

We've got to admit that can represent considerable waste.

 

Added: It is here that this investigation runs into differences between realtime and final rendering approaches..... and the large swath of territory inbetween.


  • Hash Fellow

At the blocking stage you don't do final renders, and the difference in time saved by excluding repeated frames from a shaded render will be negligible.

 

If you did want final renders at the blocking stage you could give A:M a custom list of frame numbers to render in the frame range parameter.


  • Admin

At the blocking stage you don't do final renders,

 

 

Well YOU don't. I do. Constantly. ;)

Yes, it's definitely a bad habit.

 

I may be way off track but I think if we polled A:M users most would admit to running final renders continuously.

They just change the settings to limit excessive render times.

 

If you did want final renders at the blocking stage you could give A:M a custom list of frame numbers to render in the frame range parameter.

 

 

That'd be cool if A:M could keep track of those automatically but it can't.

Although that is somewhat what I was suggesting could happen so I'll add that as an Exhibit A.

 

For discussion's sake... I'm not suggesting any new features... I'm exploring and investigating....

What if that 'custom list of frames' were driven by the keyframes of the timeline?

I know that you tend to animate on 4s so... that might equate to 1, 5, 9, 13, etc.... just like traditional animators!

A post process might even fill in the gaps with frames 2-4 (as copies of 1), 6-8 (as copies of 5), and 10-12 (as copies of 9).

 

Of course these things will generally be accomplished in a compositor so all we have to do is composite.

That compositor can even be an HTML page where a simple reference does the job of 'copying' for us.

So a fully playable sequence of 13 frames could be viewed based on only those 4 rendered frames.

All of the inbetween frames would simply reference (re-reference) the keyframes.

 

Aside: This may be where we cycle back to consider file formats again because with an HTML compositor only a few image formats will be appropriate; PNG being the most likely candidate.
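Here is a rough sketch of that "HTML page as compositor" idea: a small Python script writes a flipbook page in which every inbetween simply re-references one of the four rendered PNGs. The frame numbers, file names, and playback rate are all assumptions for illustration.

KEYS = [1, 5, 9, 13]          # frames actually rendered (on fours, hypothetical)
LENGTH = 13                   # total frames to "play"
FPS = 24

frame_map = [max(k for k in KEYS if k <= f) for f in range(1, LENGTH + 1)]
sources = [f"frame_{k:04d}.png" for k in frame_map]   # inbetweens re-reference the keys

html = f"""<html><body><img id="view" src="{sources[0]}">
<script>
var frames = {sources}, i = 0;
setInterval(function() {{
  document.getElementById('view').src = frames[i];
  i = (i + 1) % frames.length;
}}, 1000 / {FPS});
</script></body></html>"""

with open("flipbook.html", "w") as f:
    f.write(html)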

 

I shouldn't go too far afield here because I will eventually get back to concepts that tend to cause pain and apathy; concepts such as having the renderer always render to the same exact location. Folks can't seem to get beyond that to be able to move on to the next stage, which takes away all the pain. This is my failure, not theirs.


  • Hash Fellow

 

For discussion's sake... I'm not suggesting any new features... I'm exploring and investigating....

What if that 'custom list of frames' were driven by the keyframes of the timeline?

 

 

An app that searched an A:M Chor for any and all keyframes could make a list of their times, eliminate the duplicates, and present that list to you, to be pasted into the custom frame range.

 

That would be possible with the "string" tools that most languages have.


  • Admin

That would simply mean animating at 6 FPS, right?

 

Not unless I"m misunderstanding something here.
I would expect it to be either 24 FPS (traditionally) or 30 FPS (as the default in A:M).
The FPS is independent of where the keyframes are located.
But you raise a good point here in that 4 frames at 6 FPS would run as if it were 24 frames at 24 FPS.
Similarly, 5 frames at 6 FPS would run as if it were 30 frames at 30 FPS.
The downside of that straight conversion would be that all the frames would be stretched evenly unless some method were presented to allow for applying ease (slow in/slow out).
A compositor (replicator?) would need to have a means to accelerate or decelerate the references to file in a variety of meaningful ways.
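A sketch of what such a replicator might do: spread a handful of rendered frames across a longer sequence, with an optional ease curve instead of an even stretch. The smoothstep function stands in for whatever ease the user might want; everything here is illustrative.

def smoothstep(t):
    # a common slow-in / slow-out ease: maps 0..1 onto 0..1 with gentle ends
    return t * t * (3 - 2 * t)

def expand(keys, out_length, ease=None):
    # keys: rendered frame names; returns one reference per output frame
    refs = []
    for i in range(out_length):
        t = i / max(out_length - 1, 1)      # 0..1 through the output sequence
        if ease:
            t = ease(t)
        index = min(int(t * len(keys)), len(keys) - 1)
        refs.append(keys[index])
    return refs

# 4 rendered frames stretched to 24 output frames, eased at both ends
print(expand(["k1.png", "k2.png", "k3.png", "k4.png"], 24, ease=smoothstep))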

  • Admin

Okay. This is going to go even farther afield so I apologize in advance.

Every once in awhile I run across a (scholarly) paper that outlines a specific approach... usually to graphics... and something catches my eye.
Less frequently I recognize something in the description; terminology or concepts that I actually understand to some degree.
I am still shocked and amazed (and amused) when the implication is that I have guessed something right.
It doesn't take much to amuse me, so this can take the form of a specific word that seems to perfectly fit a process.
Examples of this might be 'spatial and temporal coherence' or statements of assertion like 'it is always better to do x than y' where I happened to guess at some point in time that x was generally preferable to y.
This doesn't mean a lot. Both the paper and my silly guesses could be wrong.
But for a brief moment... the lights flicker as if they are about to go on.

And this is where things get weird.

Seam carving is one of those concepts out on the periphery that hasn't quite made a match but that I am confident can fit well into the grander scheme of probing through frames and optimizing (or extracting) duplicates.
This isn't to say this isn't already being done.... I'm just acknowledging my understanding of the process involved might be catching up.
Seam carving is that technological approach (mostly in scaling) where important objects are identified and then the less important space around those objects is extracted... removed in the case of downsizing and increased in the case of adding more data into that gap. It's how digital photographers and editors can remove that interloper you don't like from a family photograph with the end result looking just like it was taken that way in real life.

Where things get particularly interesting is when we take what is normally processed on surface of a single flat plane and use the process in a volume of space and time.

Here's a somewhat random blog entry from someone who thought seam carving would be too hard to program into any of his projects:

http://eric-yuan.me/seam-carving/ (Note: This article is from 2013 so additional uses and understanding of seam carving have been made since then)

The actual code and the understanding of how it works intrigues me but before I start to settle in I already find myself distracted by the fact that a seam carving map already exists in A:M via the Channel Editor (i.e. Timeline).
I have to pause for a moment when I consider just how timely this discovery is to (potentially) understanding more about the temporal frameworks under consideration.
In the Timeline or in the Channel Editor we are actually seeing our seam carving data cutting across all frames in any given animation.
This is why, when we scale a selected group of keyframes down or up, it doesn't break the keyframes; it just condenses or expands the space inbetween that is deemed less necessary.
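For reference, the core of seam carving itself is small: an energy map plus a dynamic-programming search for the cheapest seam. The sketch below (NumPy, grayscale only) shows just that core; real implementations, such as the one on Eric Yuan's blog, add proper gradient operators and the actual removal or insertion of seams.

import numpy as np

def energy(gray):
    # simple gradient-magnitude energy map (gray: 2-D float array)
    dy = np.abs(np.diff(gray, axis=0, prepend=gray[:1, :]))
    dx = np.abs(np.diff(gray, axis=1, prepend=gray[:, :1]))
    return dx + dy

def vertical_seam(e):
    # cumulative minimum cost of reaching each pixel from the top row
    cost = e.astype(float)
    for row in range(1, cost.shape[0]):
        prev = cost[row - 1]
        left = np.roll(prev, 1)
        left[0] = np.inf
        right = np.roll(prev, -1)
        right[-1] = np.inf
        cost[row] += np.minimum(np.minimum(left, prev), right)
    # trace the cheapest seam back up from the bottom row
    seam = [int(np.argmin(cost[-1]))]
    for row in range(cost.shape[0] - 2, -1, -1):
        c = seam[-1]
        lo, hi = max(c - 1, 0), min(c + 2, cost.shape[1])
        seam.append(lo + int(np.argmin(cost[row, lo:hi])))
    return seam[::-1]   # seam column for each row, top to bottom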

Returning for a moment to the topic at hand...
Where we can apply the same approach to rendering we gain some serious control over optimization.
Personally I think A:M already does this. We just can't see that process unfold and likely wouldn't know what to make of it if we did.
But I do see opportunities for us as users to better understand what we have at our disposal so that we can best take advantage of it.



#Optical Flow I'll add this here as a note to self although it doesn't particularly apply to the above. The statement is made in another article on Eric Yuan's blog (see above):

All optical flow methods are based on the following assumptions:

  • Color constancy (brightness constancy for single-channel images);
  • Small motion

Where this does apply is in considering that we don't want or need to sample every color (as mentioned before, with millions of colors to choose from) because we can immediately scale that problem down to RGBA channels, where 255 gradient steps can be used to illuminate the way forward (via brightness). A similar thing can be seen with motion, as large motions or the finer motions of distant objects might initially be discarded because they won't be seen from the POV where our calculations start. Something to also consider is that bit of visual language (phenomenon?) which suggests slowly moving objects are in the background while faster objects are more often than not perceived to be in the foreground. We might use that understanding (and movement in general) to identify and categorize objects.
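For reference, those two assumptions are usually written together as the standard optical-flow constraint (this is textbook material, not anything specific to the blog post above). Brightness constancy says a pixel keeps its intensity as it moves by (u, v) between frames:

I(x+u, y+v, t+1) = I(x, y, t)

and, under small motion, a first-order Taylor expansion of the left-hand side gives the optical-flow constraint equation, where I_x, I_y, I_t are the partial derivatives of the image:

I_x u + I_y v + I_t = 0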


  • 3 weeks later...
  • Admin

I ran across several things related to some of the above musings and want to add one here because it's trivial enough that I'm sure to forget it.

 

The thought is that now that A:M is set (by default) to save a Project just prior to rendering...

It could (at least theoretically) be made to compare that saved Project to a previously saved Project.

The difference between those two Projects could therefore inform the renderer if anything of significance has changed.

 

Some classifications and tests would need to be trialed but this coupled with channel data could narrow the field considerably with regard to the emphasis needed for the renderer.
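A minimal sketch of that comparison, assuming the saved project is just a text file on disk: hash the new save and the previous one, and only bother the renderer if they differ. A finer-grained version would diff the project text and classify which sections (channels, render settings, models) changed.

import hashlib
from pathlib import Path

def file_digest(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def project_changed(current_prj, previous_prj):
    # True if anything at all in the saved project file differs from the previous save
    return file_digest(current_prj) != file_digest(previous_prj)

# if not project_changed("scene.prj", "scene_previous.prj"):   # hypothetical file names
#     fetch and present the previous render instead of re-rendering (the "Class 0" case below)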

 

I'm trying to imagine what success might look like and I confess I'm not seeing that clearly.

I suppose we could start with a dumb user like me who just simply forgot he just rendered a Project and tried to render it again.

This might be Class 0 where nothing has changed and since nothing has changed the previous render is fetched and presented.

 

I will admit that case would be very rare.

But Case 1 might be where only one thing has changed... say... last time I rendered out to TGA quality but this time I want to render out to VGA.

So, although no data in the Project itself has changed the renderer's requirement has changed.

The renderer, knowing that only the desired resolution has changed might prefetch the previous render to display as a preview and update that upon the completion of each newly rendered frame.

A tagline would let the user know the image being displayed wasn't final.

 

Would such a thing be worthy of code? Probably not without many other optimizations thrown in for free.

 

But this suggests three basic categories of interest with regard to change optimized renderings:

1. Iterative Changes: The Project File differences (current and previous... for additional optimization the user can overwrite what is considered previous)

2. Internal Changes: Change occurring in Time and Space within the Project itself. The renderer knows that of 15 objects in the scene in front of the camera only one object is recording any movement therefore that object and what it interacts with receives the priority.

3. Renderer changes: In this category data is collected and improved upon with every rendering. Two primary values are recorded prior to rendering and those values are weighted after the render. The first value is determined (in boolean fashion) by what options are selected for rendering; let's say that all of the values of the settings equate to 128 because almost all are turned off. This then is the ID of the underlying settings that drove the renderer (a sketch of this packing follows below). Subsequent renders are then compared first to other renderings with an ID of 128 and then to those of other IDs. The second value is the weighted value that can be changed by other factors, to include data derived from Categories 1 and 2 as well as results such as average render time per frame and overall render time.
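A sketch of how that settings ID might be packed. The option names are invented for illustration; only the packing idea matters here. With almost everything off and a single high-order option on, the ID works out to 128, matching the figure above.

OPTIONS = ["multipass", "shadows", "reflections", "ambient_occlusion",
           "motion_blur", "toon_lines", "alpha_buffer", "final_quality"]   # invented names

def settings_id(settings):
    # settings: dict of option name -> bool; returns an integer ID such as 128
    value = 0
    for bit, name in enumerate(OPTIONS):
        if settings.get(name, False):
            value |= 1 << bit
    return value

print(settings_id({"final_quality": True}))   # only the highest bit set -> 128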

 

One of the things this approach can inform is that of determining a desired render time before launching a render.

Let's say I want A:M to spend all of its resources for the next five minutes to render my scene.

Using the data collected from previous rendering it can suggest settings that will be optimal for me.

I can overwrite those but that weighting would have to be balanced somewhere else... or I'd have to increase my estimated time for the rendering.

And this is also where compositing and masking might come into play, as well as split rendering where I choose what objects in the scene are truly necessary.

As I (optimally) want the results of my rendering back in 1 second per frame this time around.. I'm willing to sacrifice a few things.

And oh by the way... I know for a fact that these five frames are going to be exactly the same so... I'll indicate that as well via the enhanced Stepping options.

 

 

Okay, I'll stop there.

I was only planning on writing two sentences.

And I'm not sure I got those in... ;)

 

 



  • Admin

Ohoh.... I was walking away from the computer and more thoughts came to mind on subjects I've long been interested in.

 

There is a general rule in computer processing that if and when control must be taken away from the user that control should be given back at the earliest possible opportunity.

Now, let me insert a disclaimer... this is not a critique of A:M's renderer taking over and holding on for the entire length of a render. That is simply part of the larger equation.

 

Yes, optimally, control should be given back immediately.

How this is best accomplished with A:M is through Netrender... so... mostly a non issue.

What I think represents considerable room for improvement relates to internal renderings in A:M.

Not to the renderings themselves but to the experience we have (or potentially could have) while rendering.

When the task of (internal) rendering is deemed important enough to launch a rendering session it is that experience that must be optimized and enhanced.

 

How can the experience of waiting for images to render be enhanced?

In many ways, the first of which is to maximize real estate and take full advantage of the rendering interface.

It's like an old TaoA:M guru saying, "When I am rendering I am rendering."

 

So, the first priority would be to maximize the render panels so that off limits areas (those that cannot be accessed while rendering) are not seen.

When I see my Chor window... right there... just out of reach.... I want to get back to using it!

 

The second priority would be to inform the user of what it is they are rendering (or have rendered).

This can take on many forms and currently there is useful information to be found... rendering times, render locations.

It would be useful to see the settings I set for the render.

It would be useful to see information about rendering (ala Info Properties).

Then I might say... "Hmmm... next time I don't think I'll do that" or "Aha! That's what I was missing!"

***

But right now we mostly just wait.

 

This isn't to say that waiting for a render is a bad thing!

Everyone needs to take the occasional break.

 

So the question might be... how can we better engage the user during that quality time spent rendering?

 

 

*** In time images created using specific settings could be presented to the user so they know what to expect. This is actually the underlying premise of the Presets and the iconic images that accompany them. A full screen Render Panel might leave the Presets panel active so that the settings could be examined. Returning to thoughts mentioned in the last post there might even be a means to compare the last Preset (last render setting) with the current preset (current render settings) so we get a clearer view of what we have changed.


  • 3 weeks later...
  • Admin

Going deeper down the rabbit hole... leaving a few notes so I can find my way back...

 

One path I don't expect to follow but it's interesting to see where it might go:

Many years ago a test of blockchain technology was attempted where individual pixels on a digital canvas were sold for $1 per pixel.

Because it's hard to see individual pixels, groups of 10x10 pixels were sold for $100.

In this experiment those who participated 'truly' owned a piece of that digital canvas and could alter it or sell it to someone else.

Other similar experiments were conducted and while interesting that specific idea didn't take off... although I'm sure lessons were learned.

One similar project posted its source code on github so the inner workings of such can be explored.

 

But that path is a considerable diversion, particularly for its pay-to-play requirement, although the concept of 'ownership' is indeed useful.

 

My thoughts turn to the underlying constructs of blockchains where 'ledgers' are concerned.

Further, the evolution of exposure sheets and how they arose from the ledger sheets of old.

 

But before going on it may be important to state that the current trend in blockchain is away from 'proof of work' for a number of reasons.

The primary one being that of power consumption (which has been detailed in Robert Holmen's Bitcoin topic).

I won't press into that any further here except to say that in many/most cases proof of work is unnecessary.

This isn't to say it isn't useful but the need must justify the related cost.

Additionally, the speed that comes from favoring verification (decryption) over solving (full decryption) can be a useful construct.

 

At this point, one might be wondering (as should be expected) what this has to do with rendering.

There are several useful concepts that can be extrapolated into the realm of rendering and playback of stored data.

Some of this fits more squarely into the area of compression algorithms and such and the differences between blockchain approaches and that of compression should be explored.

 

In the case of the experiment highlighted above a single canvas of pixels was produced and then the owners would adjust their pixels as they saw fit.

These adjustments then change the view of the current canvas but the history of every pixel state is preserved.

This history is immutable, as it is factored into the current state of the pixel (like a never-ending compression loop looking for patterns to reduce, leaving a key that marks a path should a backtrace be necessary).
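A toy sketch of that idea: each new pixel state is hashed together with the previous entry, so the history behaves like a small append-only ledger. This is only the chaining concept, nothing like a full blockchain, and the class and its names are invented for illustration.

import hashlib

class PixelHistory:
    # one pixel's append-only history: each entry is chained to the one before it
    def __init__(self, x, y):
        self.coord = (x, y)
        self.entries = []                      # list of (rgba, chained_hash)

    def set_color(self, rgba):
        prev_hash = self.entries[-1][1] if self.entries else ""
        digest = hashlib.sha256(f"{prev_hash}{self.coord}{rgba}".encode()).hexdigest()
        self.entries.append((rgba, digest))

    def current(self):
        return self.entries[-1][0] if self.entries else None

p = PixelHistory(0, 0)
p.set_color((255, 0, 0, 255))   # "Red" claims the pixel
p.set_color((0, 255, 0, 255))   # later changed; the earlier state is still on record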

 

At any rate, where the players in this game are known (likely by aliases/labels) they provide a means to identify frames of reference and what is seen from their vantage point.

This then gives us more incentive to consider exposure to a given ledger where points of view can be overlaid to produce a composite result.

 

An owner might state they own a set group of pixels within one frame of reference but also claim a different set on another.

We can therefore compare the differences to verify where changes have occurred.

We may not initially know who owns those changes so we refer to our ledger who never forgets a transaction and then determine the owner.

 

In rendering this all can occur very quickly with Red claiming a share of the temporal canvas... "I own all of these pixels through all frames of reference!"

Green might want to buy in on a few of those also while claiming some elsewhere.

Blue does likewise and productivity results.

 

An issue with current rendering approaches might therefore be that every pixel is mutable and stores no history of prior state.

With each rendering the process starts anew even though it might not ever change value or ownership.

 

The concept of Multipass surely rises above this deficit for a moment but at some point 'extraneous' data is discarded and potential gains are lost.

Needless to say, this makes me very curious about A:M's internal RAW format and the actual point in which that data is released.

If none of it is passed on for use in subsequent framings yet to be rendered then how best to measure that cost?

 

Added: It has been demonstrated that blockchains are not 'immutable' but rather 'tamper resistant'.

But within systems where mutability can be seen as advantageous there is little need for the expense related to proof of work.

End states (or slices in time) are important but only for the briefest of moments.


  • Admin

Here is a slice in time of where the art of rendering was measured to be in spring of 2017.

It delves more deeply into rendering itself under its other nom de plume, 'image synthesis':

 

The various lecture slides can be accessed via the index at the bottom:

 

http://graphics.stanford.edu/courses/cs348b/ (Link to Stanford lectures)

 

 

It should also be noted that blockchaining of distributed rendering is already a thing as demonstrated by the folks at Otoy (Link).

They are pursuing one model from a larger set of approaches... I'm not sure they are specifically attacking the same things I'm after in predictive rendering but they are very likely gathering a large amount of data that can inform that approach.

 

Added: It has been said that even PIXAR's Renderman approach discards all information and starts anew with each frame.

They obviously know what they are doing but this is very much not where I'm heading in my various wanderings.


  • Admin

I haven't delved far into the history of it but the term 'predictive rendering algorithm' came to mind so I typed it in to Google.

 

This paper from back in 1996 was high on the search list and gives a place to measure from to see where the idea has evolved.

Without knowing more I will postulate it largely followed the path of real time rendering...

 

https://www.cs.ubc.ca/labs/imager/th/1996/Fearing1996/Fearing1996.pdf

 

There are some concepts that intrigue me... more labels to me at this point.

These include concepts such as 'frameless rendering' or 'Changing Motion Thresholds'.


  • 1 month later...
  • Admin

With regard to rendering multiple frames at a time...

 

Here's some research that, in a loosely related way, suggests an approach to projection/prediction of frames in temporal space.

 

http://www.cs.cornell.edu/~asaxena/learningdepth/NIPS_LearningDepth.pdf

 

Here's a somewhat related application that follows the general idea:

 

https://github.com/mrharicot/monodepth

 

 

The label given to the process of determining depth from a single still image is "Monocular Depth Estimation".

Of course a renderer isn't going to do that... that wouldn't be practical or optimal... but the basic process used mirrors that of raytracing/pathtracing as an underlying framework.

Each shot rendered into temporal space reveals more about the volume within the framework of frames/slices along the path each tixel takes.

That same tixel can record its journey over its allotted lifetime, as with any particle shot out into space.

 

I would imagine the standard two sensors would be RGB and Alpha.

Where the former represents a hit in temporal space that registers the albedo of the surface and then returns to the origin after collecting the required data (such as angle of deflection, etc., which will not always be followed but is stored for future reference). The ray then returns to the origin along the most direct path (which, unless the receiver is placed elsewhere, should be the same path along which it was previously cast). An anticipated and well known measure would be that of rays that travel directly from origin to Alpha. This is a known linear distance based on a given frame range. If all rays in a 24 frame range reach Alpha then that space is rendered for all 24 frames; there is nothing (no object) occupying that space.
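A sketch of that "ray down the frame stack" idea, reduced to its simplest form: walk one pixel location's alpha values through the range of frames and report the first hit, or report that the ray reached the receiver untouched. The alpha stack is just a list here, standing in for whatever the renderer would actually hold.

def cast_temporal_ray(alpha_stack, threshold=0):
    # alpha_stack: one alpha value per frame for a single pixel location
    # returns the frame number of the first hit, or None if the ray reaches "Alpha"
    for frame, alpha in enumerate(alpha_stack, start=1):
        if alpha > threshold:
            return frame        # something occupies this pixel at this frame
    return None                 # clear for the whole range

print(cast_temporal_ray([0] * 24))                  # None: empty for all 24 frames
print(cast_temporal_ray([0, 0, 128] + [0] * 21))    # 3: first hit at frame 3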

 

Now again, because of channels and keyframes we don't need to cast any rays. The channel keys and splines already identify objects, orientations and movement in that space.

So we can already predict and project where objects will be rendered volumetrically in temporal space.

