Hash, Inc. Forums

Hash Animation Master Realtime


Rodney


  • Admin

Background: HAMR (Hash Animation:Master Realtime) and WebHAMR (HAMR for browsers) were an effort from several years ago (2009) to push A:M content out to new platforms. From the basic write-up:

WebHAMR is a web browser plugin that allows interactive viewing of 3D animated content produced with the Hash Animation:Master modelling and animation program. "HAMR" stands for "Hash Animation Master Realtime" and refers to the HAMR programmer's API that allows applications to be developed that can make use of the Animation:Master file loading and rendering capabilities. The results are rendered in "realtime", equivalent to the A:M shaded rendering mode, which, although not of the quality of A:M's movie rendering mode, is still perfectly acceptable for many realtime applications.

The ultimate goal of HAMR was of course to push A:M content to web browsers, but the API allowed other access as well (as demonstrated in the standalone HAMRviewer).

Some general thoughts:

- HAMR tech still works quite well, although I don't know whether the API is used by anyone, and some connectivity has been deprecated over the ensuing years.
- HAMR tech didn't get out of the initial development phase (as such it never reached the Mac port phase).
- The Mac port and some modern updated features notwithstanding... the standalone viewer (PC only) still works great, and as it is now it could be included with the current distro as an alternate means to view Projects/Models outside of A:M. (Barring that, it is still available via the Hash Inc FTP, so it can be added to your toolbox and even given to other PC users who may not have access to A:M but want to view A:M content.)

With interest in development for A:M continuing, I'm wondering if the powers that be would consider releasing some of the HAMR code so that a new generation of developers can take a crack at it.

Now, some folks might point out that A:M dropped the compressed format .PRJB (a zipped/binary version of a project file which was used with HAMR) from its menu several versions ago, but I would counter that HAMR viewers can just as easily use uncompressed .PRJ files and even load individual models and actions. In fact, without even launching A:M one can populate a scene, play back canned actions and freely roam around.

To sum this up... HAMR is dead. Long live HAMR. :)

REF: Direct link to HAMRviewer exe on the Hash Inc FTP (yes, this is an .exe, so standard downloading rules apply... when in doubt, download first and then scan it)

 

HAMRminitour001.mp4


  • Hash Fellow

A while back I was thinking to myself that someone might be able to use WebGL, which is in most browsers now, to eliminate the need for downloading the HA:MR plugin to view HA:MR content.


  • Admin

A lot might depend on the ability to release code.

I'm not sure if the API/SDK still supports the HAMR elements, as that may not have been an area that Steffen was involved in.

Is the code available... is it not... I dunno.

 

Here's another mini tour of the standalone viewer moving around a few scenes from the Extra CD.

Note that the error messages that pop up are for textures that aren't supported by HAMR (those messages could be suppressed by default).

Of interest... HAMR does display particles quite well, to include spriticle images and particle hair. ;)

 

I would say that given some of A:M's current capabilities, HAMR is even more powerful than it was when first released.

HAMRminitour002.mp4


  • Hash Fellow

Here's a tutorial on simple animation in WebGL

 

https://developer.mozilla.org/en-US/docs/We...ects_with_WebGL

 

 

As I see it, what you'd want to do is get A:M to write that sort of HTML-like code, but instead of describing and moving one square it would describe and move all the patches of your model. Simple, right? :D
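
For the curious, here's roughly what that MDN tutorial boils down to, sketched by hand in TypeScript (not something A:M emits today): one square described as a vertex buffer and rotated every frame. The "all the patches of your model" version would just mean generating many such buffers from exported patch data.

```ts
// Minimal WebGL sketch: describe one square, spin it each frame.
const canvas = document.createElement('canvas');
canvas.width = 640; canvas.height = 480;
document.body.appendChild(canvas);
const gl = canvas.getContext('webgl')!;

const vsSource = `
  attribute vec2 aPosition;
  uniform float uAngle;
  void main() {
    float c = cos(uAngle), s = sin(uAngle);
    gl_Position = vec4(mat2(c, s, -s, c) * aPosition, 0.0, 1.0);
  }`;
const fsSource = `
  precision mediump float;
  void main() { gl_FragColor = vec4(0.2, 0.6, 1.0, 1.0); }`;

function compile(type: number, src: string): WebGLShader {
  const sh = gl.createShader(type)!;
  gl.shaderSource(sh, src);
  gl.compileShader(sh);
  if (!gl.getShaderParameter(sh, gl.COMPILE_STATUS)) {
    throw new Error(gl.getShaderInfoLog(sh) || 'shader error');
  }
  return sh;
}

const prog = gl.createProgram()!;
gl.attachShader(prog, compile(gl.VERTEX_SHADER, vsSource));
gl.attachShader(prog, compile(gl.FRAGMENT_SHADER, fsSource));
gl.linkProgram(prog);
gl.useProgram(prog);

// One square as a triangle strip of four 2D vertices.
const verts = new Float32Array([-0.5, -0.5, 0.5, -0.5, -0.5, 0.5, 0.5, 0.5]);
gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ARRAY_BUFFER, verts, gl.STATIC_DRAW);

const aPosition = gl.getAttribLocation(prog, 'aPosition');
gl.enableVertexAttribArray(aPosition);
gl.vertexAttribPointer(aPosition, 2, gl.FLOAT, false, 0, 0);
const uAngle = gl.getUniformLocation(prog, 'uAngle');

function frame(t: number) {
  gl.clearColor(0, 0, 0, 1);
  gl.clear(gl.COLOR_BUFFER_BIT);
  gl.uniform1f(uAngle, t / 1000);            // "animate" the square over time
  gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```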


  • Admin
As I see it, what you'd want to do is get A:M to write that sort of HTML-like code, but instead of describing and moving one square it would describe and move all the patches of your model. Simple, right? :D

 

A lot more simple if the current HAMR code is compatible with that. ;)

 

I recall being very impressed with HAMR's ability to render particles in real time.

The one in particular I recall was one with magic pixie dust but... I can't find that one... so here's the basic TaoA:M Smoke, Wind and Fire setup:

HAMRminitour3_particles.mp4


  • Admin

Not impressed you say.

 

Here's something of an essential non-hack that extends v18's capabilities with the HAMRviewer to allow specific objects in a scene to have interaction that other objects don't.

In this video I'm just testing that the kettle and fire haven't been released to move/rotate whereas Keekat has.

I'll have to check to see how best to lock down an entire scene to avoid unintended movement (i.e. the kettle and fire can be moved if the entire set is moved... but note that Keekat moves too).

 

Keekat should also be poseable (!) but I don't think the viewer was ever coded for that while the webviewer was.

Note that (to my knowledge) the webviewer isn't compatible with any modern browser... bummer... so I'm only playing with the standalone viewer.

 

At any rate, the way to get this additional functionality into v18 is to copy/paste the HAMR.hxt file (which is found in the HAMR installation folder) into the v18 hxt folder.

This exposes objects to additional options in a similar way/location to the Newton plugin. There are four elements this exposes (Moveable, Turnable, Scaleable and Poseable, of which the latter two appear to be limited in the standalone viewer).

 

And while I haven't delved deeply, technically you don't need to set those special attributes for the viewer in A:M.

You could add them with a text editor just prior to the end of the desired bone:

 

ModelMoveable=TRUE

ModelTurnable=TRUE

ModelScaleable=TRUE

ModelPoseable=TRUE

Properties>

 

Or set them to FALSE as appropriate.
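
If you wanted to script that edit rather than do it by hand, here's a rough Node/TypeScript sketch. The closing marker is copied from the snippet above and the file name and "first match" targeting are placeholders; confirm the exact markers against a real .mdl/.prj opened in a text editor before trusting this.

```ts
import { readFileSync, writeFileSync } from 'fs';

const FLAGS = [
  'ModelMoveable=TRUE',
  'ModelTurnable=TRUE',
  'ModelScaleable=TRUE',
  'ModelPoseable=TRUE',
];

// Insert the HAMR flags just before the first line ending in the closing marker.
// A real tool would locate the specific bone's block rather than the first match.
function addHamrFlags(path: string, closingMarker = 'Properties>'): void {
  const lines = readFileSync(path, 'utf8').split(/\r?\n/);
  const out: string[] = [];
  let inserted = false;
  for (const line of lines) {
    if (!inserted && line.trim().endsWith(closingMarker)) {
      out.push(...FLAGS);
      inserted = true;
    }
    out.push(line);
  }
  writeFileSync(path, out.join('\n'));
}

addHamrFlags('Keekat.mdl'); // hypothetical file name
```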

Extending_v18_functionality.mp4


  • Admin

I was kind of hoping all the A:M programmers who have ever worked on A:M (to include plugins!) might come together for a 25th anniversary summer of code but... perhaps we'll have to wait for the 30th anniversary. :P


  • Admin

Similar to movement in A:M, the following number keys can be used to change the view:

- '1' or 'C' camera view

- '2' front view

- '4' left view

- '5' top view

- '6' right view

- '7' bird's eye view

- '8' back view

- '0' bottom view

 

After any of these mode movements, the user can return to the original "camera view" by pressing the '1' key on the keyboard, either above the letter keys or in the numeric keypad.

 

 

Some standard navigation/interface instructions:

 

Moveable- If turned "On" allows the WebHAMR user to left click on the object and drag it around in the scene in the ground plane or shift-left click to drag it around in the screen plane.

 

Turnable- If turned "On" allows the WebHAMR user in "Turn mode" to left click on the object and rotate it in any direction or shift-left click to rotate it around its axis.

 

Zoomable- If turned "On" allows the WebHAMR user in "Zoom mode" to left click on the object and zoom it or scale it relative to the rest of the scene.

 

Poseable- If turned "On" allows the WebHAMR user in "Move mode" to click and pose bones in a model. If the bone is part of an IK chain, the bone will be translated with IK constraints applied. If the bone is not part of an IK chain, the bone will be rotated. If CTRL-left click, IK chain end translation or bone shaft rotation occurs. If Alt-left click, IK chain pivot translation or bone roll rotation is applied. Yeah, this is a bit confusing but play with it a bit to get a feel for it.

 

WOOOHOOOO! It works!!!

Life is sweet.


  • Admin

More tips:

 

The way to manipulate lights in the scene is to add a light or lights into a Model.

In order to be able to see that light (in order to be able to select and manipulate it) you really need to create a small piece of mesh/geometry (that is assuming the Model is otherwise empty).

Then you can select it and position/turn the light by moving/turning the model as needed.

 

From my initial tests the viewer doesn't seem to care for Action Objects.


HAMR is one of the most amazing tools in A:M... it has great potential. I remember a month-long contest with it a long time ago and the projects were amazing; my entry was a mechanical flower with a lot of action and sounds. Were these projects lost?

Am I dreaming, or was it possible to use some scripts to control it (Ruby or Python)?


  • Hash Fellow

Ken Chaffin on LinkedIn

 

It says he's "Director, Texas Tech University Libraries 3D Lab, Digital Media Studio, Media Lab (research)"

 

I gather he has zero interest in working on HA:MR now but perhaps he might have a student for whom this would be an interesting project?


I haven't seen much about using OpenGL on a webpage, but I think it's a very good concept. JOGL is what I found other than WebGL, but a lot of those things just seem way over our heads, and there's nothing there to show when you really look at it.

Not that I'm trying to insult or anything, far from it, but I haven't seen anything like the DirectX 8.1 SDK. That was just perfection, although very demanding to learn, not to mention overwhelming with potential things one could never learn all of. From what I can interpret, it was the same as OpenGL, but combined with Input, Sound, Music, and some other stuff. And it was free! But that only goes so far, I think. If you get it to do something, you have to remember who put the canvas and paint in front of you. I think Microsoft did a great job there.

Google's Android Java can import OpenGL (DLLs), but converting to just Java on a webpage doesn't look like a simple task at all. In fact, it looks like going back to programming devices like sound cards and graphics cards, then emulating whichever one is on the end user's computer... probably with some boss breathing down your neck because it's so competitive... heck with that! But I'm not an expert (or even mid-level), nor have I ever worked in that industry... but I just look in the mirror and say "I'm good enough, I'm smart enough, and doggonit, people like me."


  • Hash Fellow

As far as DirectX vs. OpenGL in A:M... I think OpenGL had the advantage of being both Mac and PC.

 

Maintaining one GL for both OS's is more economical than maintaining one GL for both plus one GL for just PC.

 

That's my understanding of it.

 

However, HA:MR is outside that equation since it is about what a browser can do rather than what the OS can run.

 

WebGL seems to reproduce much of the displaying power of the HA:MR plugin and it's already built-in to most browsers now without the need to install a plugin.

 

You can see a WebGL version of one of my A:M models here:

 

turn around a T-Rex


  • Admin

Similar to your effort with webGL my focus is primarily on what we can currently do.

Since everyone kind of wrote off HAMR I thought it might be worth pointing out that it can still be used.

 

To my knowledge HAMR as a web technology is not going anywhere soon so I'm not suggesting THAT be developed.

There is also the issue of MFC classes that should be updated (which if updated might just help Steffen in that he wouldn't have to take on that task).

 

The viewer on the other hand demonstrates working technology, and anything that can display A:M models in action is worth considering, even if it's just as a way to learn how to speak A:M in programming. I believe I've gone on record as suggesting that A:M itself could become an A:M content viewer once its subscription activation runs out. There is after all no better program to view A:M files than A:M itself. I realize that such a thing would be a very tall order so I'm not expecting anything like that to happen.

 

But... HAMR is working. And I'm thankful for that. :)


I like the name HAMR! I picture a lot of stuff getting smashed and a hollow ringing coming after that!

 

I think translating A:M actions into a 3D environment that can render in real-time is where it's at. Nobody could look at the A:M software and not think, let's start from there.

 

I had trouble (I mean fun!) trying to put the actions from A:M into a 3D environment (they flew off the screen as posted a long time ago), with AM2ex from Obsidian Games back then. I do like the idea of using OpenGL on a website because the wave of smartphones still uses the "internet" on WiFi. So adapting to that is my goal, but with a multi-layered adjustable frame rate video player that takes Young's slit modulus to the next level, then!, puts a 3D layer in there! Like it wants to dance with the video. But interactive too! so people can type "I'm bitchin" into it like on HeadBook (oops.sorry.notTherYet.U.say.)

 

Actually my goal is to do diddly-squat, but not have to worry!


  • Admin
Why would anyone want a plug-in for 3D content browsing these days?

 

 

That almost sounds like a rhetorical question.

My answer would be: it depends on what the plugin could do with 3D content.

 

This is a huge oversimplification, but most 3D plugins seem to be focused on three areas these days: realtime rendering, resource/asset management and 3D printing.

Content browsing primarily resides in the latter two but fits well with the first in that realtime display of the content is often desired to improve the user experience for programs addressing those other two platforms.

 

The downside of plugins is that by their very nature they have been historically proprietary, each plugin having to bridge very wide chasms between otherwise incompatible technology.

That almost would be a fourth reason for a content browsing plugin, but I think it's already covered by one or more of the first three. HAMR is certainly no exception in the area of being proprietary, but one must also remember that at the time no one was making spline-compatible 3D browsers. So that would be the fourth reason to invest in some form of plugin for viewing 3D content: to bridge significant technological gaps.


  • Admin
By the way, what's the point of viewing 3D content that isn't a game, a rendering or a video?

 

 

I'm afraid I don't understand the question.

The term 'rendering' is also a bit too much of a catch all category.

For instance, does your definition of 'rendering' include the process of scanning real world objects? Technically that's not rendering so you could add that to your list.

 

Why view anything?

 

I ask this question in all sincerity because it ties in with your previous question.

This is the driving force that causes us to want to develop.

 

If we assume by 'rendering' you mean capturing data on a 2D plane or ostensibly in 3D virtual space then the term rendering could cover almost anything.

But rendering, in older terminology, is more akin to output than input in that it is simply adding form to a shape via light and shade.

 

I've already mentioned 3D printing and that can be considered an alternative means of rendering; extending the view of virtual objects into real three and even four dimensions of time and space.

 

What did people do before the advent of modern day gaming?

What did they do before watching television?

What is the point of innovation?

 

I'll offer a scratch at one possible answer: to go beyond being actively or passively entertained and be creative.


 

the fourth reason to invest in some form of plugin for viewing 3D content

 

By the way, what's the point of viewing 3D content that isn't a game, a rendering or a video?

 

 

Architectural walk-through? Being able to render a scene from a favorite animation from a different angle? Learning body mechanics by viewing an animation from all angles? I'm sure there are a lot of possible uses.


  • Hash Fellow

Here's a pretty impressive Fake SSS effect rendered with WebGL.

 

http://alteredqualia.com/three/examples/webgl_materials_skin.html

 

 

It's not perfect and it's just a trick, but if someone could figure out that trick and add it to A:M that would be cool. You wouldn't need WebGL, just the math that makes the trick work.
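
For what it's worth, one well-known cheap trick in that family is "wrap lighting". It's not necessarily the trick that particular demo uses, but it gives the flavor of the math involved; the names here are just for illustration, sketched in plain TypeScript:

```ts
type Vec3 = [number, number, number];

const dot = (a: Vec3, b: Vec3) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

// normal and lightDir are unit vectors; wrap in [0,1] lets light "bleed" past the
// terminator, which is what reads as soft, skin-like shading.
function wrapDiffuse(normal: Vec3, lightDir: Vec3, wrap = 0.5): number {
  return Math.max(0, (dot(normal, lightDir) + wrap) / (1 + wrap));
}

// Tint only the extra, wrapped-around light with a reddish "scatter" color so the
// shadow boundary picks up the subsurface hue while lit areas stay the base color.
function fakeSSS(normal: Vec3, lightDir: Vec3, base: Vec3, scatter: Vec3, wrap = 0.5): Vec3 {
  const wrapped = wrapDiffuse(normal, lightDir, wrap);
  const plain = Math.max(0, dot(normal, lightDir));
  const bleed = wrapped - plain; // light contributed only by the wrapping
  return [
    base[0] * wrapped + scatter[0] * bleed,
    base[1] * wrapped + scatter[1] * bleed,
    base[2] * wrapped + scatter[2] * bleed,
  ];
}
```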


I'll offer a scratch at one possible answer: to go beyond being actively or passively entertained and be creative.

 

Ah, but isn't that a reason to use an actual editor? You can't really get creative with a viewer.

 

 

Architectural walk-through?

 

Good point. But that's best done with a game engine, so it might be loosely included in the games category.


  • Hash Fellow

If this could do what one might do with a game engine but without all the conversions and middle steps to get there that would be a handy thing.

 

I admit it is not a frequent need for most 3D users in the way that regular renders are.


Have you guys heard of Three.js and Babylon.js? Both are able to display 3D inside a browser window without a plug-in being installed. You will need a browser that supports HTML5 and a graphics card that supports a newer version of OpenGL though. Not sure if they are compatible with phones and handheld devices. But worth a look...

http://www.threejs.org
http://www.babylonjs.com


I am using three.js and it is nice. HAMR was nicer though because it could use spline data and calculate the resolution based on your hardware and settings on the fly... a real killer feature if you ask me, especially since the file sizes of A:M's MDLs and PRJs are way smaller and therefore better suited for mobile users (and at that quality even desktop users...)

 

But anyway, it would be very cool to write an importer of MDL data for three.js.

 

I am using it with OBJs till now...
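
Since OBJs came up: here's a minimal three.js sketch in TypeScript (assumes `npm i three`; the loader import path varies with the three.js version, and the model path is hypothetical). An MDL importer would plug in at the loader step, producing a BufferGeometry from tessellated patches instead of parsed OBJ faces.

```ts
import * as THREE from 'three';
import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.set(0, 1, 5);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

scene.add(new THREE.HemisphereLight(0xffffff, 0x444444, 1.0));

// Load an OBJ exported from A:M (hypothetical path) and add it to the scene.
new OBJLoader().load('models/keekat.obj', (group) => {
  scene.add(group);
});

// Slow turntable, like the T-Rex demo linked earlier in the thread.
renderer.setAnimationLoop((time) => {
  scene.rotation.y = time / 2000;
  renderer.render(scene, camera);
});
```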


  • Hash Fellow

HAMR was nicer though because it could use spline data and calculate the resolution based on your hardware and settings on the fly

 

Are there any open-source game engines?

 

If you could take the spline-interpreting part of HA:MR and insert that into the animation functionality of a game engine that displayed with WebGL, then you could use your A:M assets directly in your game instead of having to convert them into a weird format and losing all their special stuff.


Are there any open-source game engines?

 

Yes there are. Notably, everything by Carmack up to and including id tech 4 (Doom 3). Ogre 3D is a well-known real-time rendering engine.

 

 

all their special stuff

 

If you mean smooth surface interpolation, I think we're going to see it in games thanks to Pixar's OpenSubdiv pretty soon. But if you specifically mean A:M's proprietary interpolation technique, it doesn't stand a chance.


  • Hash Fellow

 

 

 

all their special stuff

 

If you mean smooth surface interpolation, I think we're going to see it in games thanks to Pixar's OpenSubdiv pretty soon. But if you specifically mean A:M's proprietary interpolation technique, it doesn't stand a chance.

 

 

 

It's just math, math that worked in HA:MR. Someone takes the routines in HA:MR that handled the conversion from A:M splines to polygons and translates them to feed the game engine. They write it as a plugin or add-on or library or whatever one does when one adds capabilities to an open-source program.

 

That's not trivial, but it's something a capable programmer could be recruited to do.
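
As a rough illustration of what that spline-to-polygon step involves (using a generic bicubic Bézier patch as a stand-in, since A:M's own Hash-patch interpolation isn't public), a tessellator is essentially "evaluate the patch on a UV grid, emit triangles":

```ts
type Vec3 = [number, number, number];

// Evaluate a cubic Bézier curve (4 control points) at parameter t.
function bezier1D(p: Vec3[], t: number): Vec3 {
  const b = [(1 - t) ** 3, 3 * t * (1 - t) ** 2, 3 * t * t * (1 - t), t ** 3];
  const out: Vec3 = [0, 0, 0];
  for (let i = 0; i < 4; i++)
    for (let k = 0; k < 3; k++) out[k] += b[i] * p[i][k];
  return out;
}

// Tensor-product evaluation of a 4x4 bicubic patch at (u, v).
function evalPatch(ctrl: Vec3[][], u: number, v: number): Vec3 {
  const rowPts = ctrl.map(row => bezier1D(row, u));
  return bezier1D(rowPts, v);
}

// Sample the patch on an (n+1) x (n+1) grid and emit two triangles per cell:
// the flat vertex/index arrays a GL-style renderer or game engine expects.
function tessellatePatch(ctrl: Vec3[][], n = 8) {
  const positions: number[] = [];
  const indices: number[] = [];
  for (let i = 0; i <= n; i++)
    for (let j = 0; j <= n; j++)
      positions.push(...evalPatch(ctrl, i / n, j / n));
  for (let i = 0; i < n; i++)
    for (let j = 0; j < n; j++) {
      const a = i * (n + 1) + j, b = a + 1, c = a + (n + 1), d = c + 1;
      indices.push(a, c, b, b, c, d);
    }
  return { positions, indices };
}
```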


  • Admin
if you specifically mean A:M's proprietary interpolation technique, it doesn't stand a chance.

 

 

 

And what exactly do you think SubDiv and such are pushing toward?

Funny thing about techniques that don't stand a chance... like all good ideas... they keep coming back.

 

Keep in mind that the whole HAMR endeavor was an effort to degrade splines to where they could be viewed by graphics cards optimized for rendering polygons.

A full solution would bypass that unnecessary process and directly interpret the splines so they can be directly viewed on all platforms.

If that is the proprietary interpolation technique you refer to then we are in full agreement.


  • Admin
I am using three.js and it is nice. HAMR was nicer though because it could use spline data and calculate the resolution based on your hardware and settings on the fly... a real killer feature if you ask me, especially since the file sizes of A:M's MDLs and PRJs are way smaller and therefore better suited for mobile users (and at that quality even desktop users...)

 

 

Yes, a survey of available tech still falls very short of what HAMR could do many years ago.

I tried to look into three.js and similar approaches but they are not (yet) very intuitive and quite frankly painful.

In order to even move in the right direction you have to hang up your artistic license and focus on being a technician.

Not an ideal situation (we should only need to wear the technician hat when it is ideal to do so), and that certainly is not A:M's approach, which wherever possible leaves the technical stuff in the peripheral view.

 

Still there is movement in the right direction and more bridges are being built.

That's a very good thing.


 

 

And what exactly do you think SubDiv and such are pushing toward?

 

Catmull-Clark subdivision surfaces. It's also a technique for smoothing out lightweight geometry, but there's an important difference: everyone already knows SDS, everyone already uses SDS, everyone already has tons of SDS-compatible models, and every piece of software can produce them.

 


  • Admin

I must assume you meant to say 'everyone' that matters. ;)

I was curious if you wanted to share where you thought the technology was heading in future iterations.

SDS isn't moving toward SDS... it is SDS... and it certainly isn't end of life yet.

SDS isn't in competition with splines/patches either. It simply brings polygonal surfaces more in line with continuous splines and smooth patches (nonlinear toward linear) and the industry has greatly benefited by this.

 

There are a few things to consider with regard to SDS:

You know a lot about this already but I'm posting the following here for general discussion's sake.

 

Tri-mesh/quad-mesh

Different subdivision schemes are used for triangular meshes and quad meshes. (no surprise there!)

The schemes/algorithms are specifically optimized for each because they can ignore/bypass processes that aren't needed (resulting in leaner code... quicker processing... less waste).

Quad meshes can be pre-processed as tri-meshes which in turn can be processed with tri-mesh schemes.

Note that this is one of many reasons why quads are considered superior to tris: a quad can be processed with either type of scheme while the reverse (adapting tris to quads) cannot. (Well, they can, but in my view it's a lot like comparing bitmaps to vectors.)

Pre-processing of tris (from quads) can lead to artifacts as the splitting of quad faces cannot be generalized (i.e. without user input the computer must use a 'best guess').

 

So, tri-meshes tend to be inferior both coming and going.

 

Interpolating/approximating

The various types of meshes are usually defined by sets of control points.

Subdivision schemes can smooth the shape of the mesh by inserting new vertices into that mesh.

If the original control points are moved the SDS smoothing scheme generally approximates the shape.

If the original control points are maintained the SDS scheme generally interpolates the shape.

With polygonal meshes, approximating schemes tend to be more flexible and produce smoother surfaces but are more difficult to modify into a specific shape because the final shape no longer passes through the original control points (in fact those original CPs may no longer exist).

 

Face/Vertex splitting

Face splitting schemes (primal) split polygonal faces into many (but optimally four) new faces.

Vertex splitting schemes (dual) insert new vertices for each face.

Note that dual schemes tend toward slower processing than primal schemes.

 

The Catmull-Clark focus is toward quads.
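
For reference, the standard Catmull-Clark refinement rules (the scheme OpenSubdiv implements) are, per subdivision step:

```latex
% new face point: average of the face's k vertices
f  = \frac{1}{k}\sum_{i=1}^{k} v_i
% new edge point: average of the edge's endpoints and the two adjacent new face points
e  = \frac{v_1 + v_2 + f_1 + f_2}{4}
% updated original vertex of valence n, where F = average of the adjacent new face
% points and R = average of the midpoints of the incident edges
v' = \frac{F + 2R + (n-3)\,v}{n}
```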

So what does this mean for the future?
The biggest benefit to the industry in adopting SDS was a new emphasis on technology that pushed users toward quad meshes.
So the question is still out there for everyone, even those who at present don't particularly matter, to guess: Where is Open SubDiv and similar technology moving the industry next?

SDS isn't in competition with splines/patches either. It simply brings polygonal surfaces more in line with continuous splines and smooth patches (nonlinear toward linear) and the industry has greatly benefited by this.

Precisely. And it's nothing new: RenderMan subdivision surfaces have been based on polygon-like primitives from day one. What's new is that today the subdivision can be hardware-accelerated thanks to shader-based tessellation. In fact, A:M's version of Coons patches can also be implemented with shaders, but no one has done it as yet. Correct me if I'm wrong, but all spline modelling and animation software has done patches in software so far, including the flagship, A:M.

In this topic, HA:MR's ability to display patches is regarded as an enormous feat, but it really isn't. HA:MR does the same thing as A:M itself and a couple more (now deceased) programs, including the open-source Jpatch (in case anyone's after the sources of a complete implementation). If anyone is interested enough, they can go ahead and do a hardware-accelerated implementation in modern OpenGL (or D3D). That would really be a step forward.

 

Different subdivision schemes are used for triangular meshes and quad meshes. (no surprise there!)

I don't know about this. I'm aware of two subdivision schemes: Catmull–Clark and Doo-Sabin. Neither is particularly concerned whether your primitives have three, four or twenty-nine points. They are general enough for all topologies. However, Catmull–Clark is notable in that, when done recursively (which it doesn't have to be, as demonstrated by Jos Stam), the very first iteration produces all quads, let alone the subsequent ones.

 

one of many reason why quads are considered superior to tris

Another very important reason is that quads go hand in hand with edge loops, which are an amazing surface flow control and mesh navigation tool. Edge loops are the bread and butter of subdivision modelling.

 

If the original control points are maintained the SDS scheme generally interpolates the shape.

In many situations, using a full crease in your SDS makes your points stay put. Admittedly, you have to be careful where you crease.

 

Where is Open SubDiv and similar technology moving the industry next?

I'll venture a guess. The industry will probably move further away from manual topology management into pure digital sculpting territory (think ZBrush). Geometry detail is no longer an issue, and enough is known about topology adaptation that most, if not all, of it can be done automatically. As regards rigging, I suppose we'll witness the emergence of several bone-and-muscle systems that are internally based on lightweight implicitly generated meshes.


 

the fourth reason to invest in some form of plugin for viewing 3D content

 

By the way, what's the point of viewing 3D content that isn't a game, a rendering or a video?

 

Years ago I was developing my website totally in 3D using HAMR... where users could walk through a labyrinth composed of images, videos and sounds.


Yes, it is hard... anyway I loved the way HAMR did it... very small file sizes at stunning resolution possibilities, even a few years ago, without CUDA- and OpenCL-based approaches.

Today polygon-based approaches (as source material... HAMR used polygons to display stuff too) are slowly moving in the same direction, but it really was a step ahead of its time...

 

The infrastructure was really the only reason why it did not spread widely, if you ask me... I really do not like this at all... it should be about the best techniques, not about market shares... (I know this is unrealistic, but it is not the best way we could go...). Subdivision surfaces are not the same approach, since you just don't know exactly how your model will look before subdividing it. That is the biggest difference. Smoothing afterwards is never as good as working smooth from the beginning... today it gets better and better, but it really was a bad thing that it took so long and that we are today at the same quality we could have reached years ago...


you just don't know exactly how your model will look before subdividing it

You do, because you don't model "before" you subdivide. You display your subdiv and your poly proxy simultaneously.

 

 

we are today at the same quality we could have reached years ago

By quality, do you mean resolution? A smooth model is not necessarily a good model. There are telltale signs of sloppiness whether you are working with splines or subdivs. A shoddy subdiv model looks shaved out of shape, and a shoddy spline model looks crumpled. It takes effort and skill to make the best use of your preferred choice of surface type.

And the attainable resolution in real-time rendering hasn't really depended on the surface type all these years.


 

you just don't know exactly how your model will look before subdividing it

You do, because you don't model "before" you subdivide. You display your subdiv and your poly proxy simultaneously.

 

I would not know how to split an edge (for instance) in any reasonable way without decreasing subdivision levels again. At least the last time I tried that in Softimage I was constantly changing the subdivision level over and over again till I got what I wanted. I have to admit that that was maybe 3 or 4 years ago, but it really was a pain... I would have become accustomed to it for sure, but since I was used to getting exactly what I wanted at any resolution I wanted, it really was less good in that aspect.

 

By quality, do you mean resolution? A smooth model is not necessarily a good model. There are telltale signs of sloppiness whether you are working with splines or subdivs. A shoddy subdiv model looks shaved out of shape, and a shoddy spline model looks crumpled. It takes effort and skill to make the best use of your preferred choice of surface type.

And the attainable resolution in real-time rendering hasn't really depended on the surface type all these years.

 

Not necessarily, but there are situations when it would have been exactly that: better. No longer having to use proxy models or animated subdivision levels can be much easier. Always the right resolution you want, exactly as you wish it should look (not subdivided and looking slightly different than before because the smoothing algorithm is not doing exactly what you would think it would do). Don't get me wrong: there are situations in which SubDs can be very helpful, and no doubt it takes a master of the art to create great models either way, but it is not all better with SubDs like many try to tell you when arguing about Splines/Patches/NURBS/SubDs... they have disadvantages just as splines have some. But splines & patches (especially Hash splines & patches) are in some situations much better to use. As always, it all depends on the situation...

 

I am not really talking about real-time rendering here but about final rendering with CPUs & GPUs, RAM amounts, file sizes of super-high-resolution polygon models, etc.

 

See you

*Fuchur*


There are situations in which SubDs can be very helpful, and no doubt it takes a master of the art to create great models either way, but it is not all better with SubDs like many try to tell you when arguing about Splines/Patches/NURBS/SubDs... they have disadvantages just as splines have some. But splines & patches (especially Hash splines & patches) are in some situations much better to use. As always, it all depends on the situation.

True. And it's fortunate this thread isn't descending into a splines-über-alles polygon-bashing smugness fest.

 

 

I would not know how to split an edge (for instance) in any reasonable way without decreasing subdivision levels again.

Huh? You just split it. Leave the derived subdiv be.


Huh? You just split it. Leave the derived subdiv be.

 

 

With subdivision applied, I could not easily find the edge I wanted to split, for instance. I could have used a viewing mode with both (smoothed and not smoothed) surfaces, but that really just adds more lines (possibly many more) to your viewport and was not really easy to handle. So I hit + / - over and over again to see what a split edge in the low-poly model would do to the high-poly model in the end. Maybe I was using it the wrong way, I don't know.

 

True. And it's fortunate this thread isn't descending into a splines-über-alles polygon-bashing smugness fest.

 

 

Does not really make sense to get into that kind of stuff. It never really helps anybody and I doubt there is a "true" answer to such a question anyway... in the end it is all a question of belief... (just to ask: are you German? ;) It kind of comes across that way, and the American keyboard doesn't exactly help with finding the "Ü" ;))...


My point of view from a technical perspective is that spline and patch modeling and rendering are a subset of the subdiv category. This has been demonstrated. Hash patches need to be subdivided into triangle meshes to be rendered. This is true for the ray-tracer and for the real-time renderer, HA:MR or not. Hash patches share many characteristics with Sub-D, and they use the same mathematical framework for subdivision, except they use different basis functions for the subdivision and different constraints when subdividing n-ary vertices.
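
As a generic illustration of "same framework, different basis functions" (A:M's exact Hash-patch scheme is proprietary, so a standard cubic Hermite segment stands in here): a Hermite segment passes through its end control points, while the uniform cubic B-spline segment that underlies Catmull-Clark only approximates its control points.

```latex
% Cubic Hermite basis (interpolates endpoints P0, P1 with tangents T0, T1):
P(t) = (2t^3 - 3t^2 + 1)P_0 + (t^3 - 2t^2 + t)T_0 + (-2t^3 + 3t^2)P_1 + (t^3 - t^2)T_1
% Uniform cubic B-spline basis (approximating; bicubic B-splines are the regular-region
% limit surface of Catmull-Clark):
P(t) = \tfrac{1}{6}\left[(1-t)^3 P_0 + (3t^3 - 6t^2 + 4)P_1 + (-3t^3 + 3t^2 + 3t + 1)P_2 + t^3 P_3\right]
```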

 

From an artistic point of view, I don't see how quad modeling could be considered superior to tri modeling in the absolute. There are so many situations where the quad constraint is detrimental to good modeling; any sufficiently detailed anatomical model, for instance. But at the same time there are so many situations where quad modeling simplifies the modeling task so much, like for architectural and many furniture models.

 

Where is SubDiv going next is a rhetorical question. Wrong focus. Sub-Div doesn't need that many more innovations. The industry is mature and research focus is not on those issues anymore. This is why Disney opensourced their sub-div. No competitive advantages anymore.

 

No matter the technology used, in the end it is the artist behind the screen that makes the difference.

 

Concerning porting HA:MR to today's browser environments: this would be a huge undertaking. HA:MR was designed in the days of OpenGL 1.1-1.5, the so-called "Fixed Function Pipeline". Converting that code to OpenGL ES 2.0 with the programmable shader model would be a major task. In addition to that, the browser industry is shifting toward a plugin-free environment. Exit plugins, enter apps. Google has already ditched Netscape plugins in favor of PNaCl applications, Firefox is looking at an alternative like asm.js, and Microsoft IE 11 was already supposed to be plugin-free under Win 8, but MS reverted this decision... for now.


  • Admin
Where is SubDiv going next is a rhetorical question. Wrong focus. Sub-Div doesn't need that many more innovations. The industry is mature and research focus is not on those issues anymore. This is why Disney opensourced their sub-div. No competitive advantages anymore.

 

 

Well put.

 

spline and patch modeling and rendering are a subset of the subdiv category

 

 

From a non-technical view (not constrained to operating within the limitations of current hardware reality) I see this in reverse, because the lines and surfaces must exist before they can be divided, much less subdivided.

 

Hash patches need to be subdivided into triangle meshes to be rendered.

 

 

 

While the subdivision of patches is required for display via current hardware, if one were to look beyond the 2D displays currently offered that might offer some fresh perspective. But the fact remains that current display technology must be targeted.

 

Somewhat unrelated: I'd be curious if this display constraint has anything to do with the old methodology of using half of a pixel... not sure how/if that even applies.

 

This does also beg the question... when Hash Inc was working directly with graphics card makers to advance spline/patch technology, was that similar to today's approach or was it something that would still be considered novel technology?

 

Concerning porting HA:MR to todays browser environments. This would be a huge undertaking.

 

 

Translating/updating the custom MFC classes alone would make such a port undesirable.

It will take considerable time and effort for technology to advance to where it can more fully exploit the elegance of splines and patches in virtureal timespace.

 

Added: It does help to note what type of subdivision surfaces are under consideration. Otherwise Catmull-Clark may be inferred.

It may be worth noting that after one round of Catmull-Clark subdivision all surfaces are quads. It is only just prior to rendering (to graphics cards that require this) that all the quads are tessellated (I tend to say degraded) into tris. In Catmull-Clark subdivision (at least by Disney/Pixar) the first round is considered a pre-process, the whole purpose of which is to convert/translate tris to quads.


  • Admin
What do you mean?

 

 

Rendering directly from splines and patches. (And the word 'perspective' here is meant to have a dual meaning, as that approach would allow for rendering in more than two dimensions: holographics, immersive displays not built with modern-day hardware, projection, 3D printing without print heads constrained to a single plane, etc.) In other words, not what we are constrained to now but where the future of computer graphics will be.

 

Also of note, a thought I should have added above: the process of tessellating can be very (computationally) expensive. There are considerable benefits to removing the extraordinary from an (initial) equation. Or better yet, leveraging those (extraordinary artifacts) to better understand how to make the final product even better.


  • Admin

I don't know enough about current display technology to even offer a suggestion.

(Which in theory could be beneficial because I don't know what can't be done)

 

 

 

Somewhat related:

I found this to be a nice introduction to the actual process of subdivision:

http://www.rorydriscoll.com/2008/08/01/catmull-clark-subdivision-the-basics/


I don't know enough about current display technology to even offer a suggestion.

It's not about display technology. You can't "directly" digitise something that's produced by a continuous function of your inputs. There'll always be some kind of sampling.

