• Welcome to SC4 Devotion Forum Archives.

Using Blender (open source modeling program) for content creation.

Started by eggman121, December 29, 2016, 06:01:10 PM


c.p.

Looks like some pretty good progress :thumbsup:

Are you going to be integrating this with gmax, or are you going to attempt to create your own SC4Model files?  (I imagine the UV mapping calculations to be a bit of a nightmare).

rivit

@vortext - also nice progress - you're all starting to make me worry about the next bit.

@c.p.

   The short term goal is to get Renders and LODs out to be able to use gmax. My ultimate goal is to take gmax out of the equation. My main question at the moment is whether it will be in Blender Python or external, with say .NET.

From the maxscripts I have what needs to be done - it's essentially this, given a Render and a LOD:

1) Project (transform) the LOD to the view of the render, i.e. look at it from the camera viewpoint then u=across and v=down. That's the texture coords.
2) cull all faces facing the back or completely obscured
3) transform the LOD mesh into SC4 coordinates
4) make an S3D from the resultant mesh and use the u,v for each vertex. Save the one FSH.

The problems arise when the render is bigger than 256x256; then:
1) Do 1 as above.
2) First divide the mesh horizontally at the 256 boundaries by planes made from the corners of the 256x256 box and the camera point.
3) Then for each horizontal slice do the same vertically at the 256x256 boundaries.
4) Fix the uv for the vertices so produced.
5) Transform the resultant meshes back to SC4 coordinates, cull hidden faces.
6) Make one S3D and FSH for each 256x256 part of the render which actually has a mesh.

There's also some ID housekeeping to do.

  Conceptually not ridiculous, but 3d geometry is always a brainscrambler. 
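A minimal sketch of step 1, projecting a LOD vertex into the render's view to obtain texture coordinates. The camera basis vectors, origin and view extents here are illustrative assumptions, not values pulled from gmax or Blender:

```python
# Hypothetical sketch: project a LOD vertex onto an orthographic camera
# plane to get (u, v) texture coordinates (u across, v down).
# cam_right / cam_up are the camera's basis vectors, cam_origin the
# centre of the view, width / height the view extents in world units.
# All of these names are assumptions for illustration only.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project_to_uv(vertex, cam_right, cam_up, cam_origin, width, height):
    rel = [p - o for p, o in zip(vertex, cam_origin)]
    u = dot(rel, cam_right) / width + 0.5   # 0..1 left to right
    v = 0.5 - dot(rel, cam_up) / height     # 0..1 top to bottom
    return (u, v)
```

A result outside 0..1 then means the vertex projects outside the render.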


A discovery:
When doing a little work with MattB to verify the difference in sun angles between gmax and 3dsmax he mentioned in passing that the game actually doesn't have correct shadows either. This isn't something I've ever noticed but he's right. Somewhere in the MAXIS machine the viewpoint azimuth (angle from N) and the sun azimuth have been interpreted as the same thing, meaning the in-game shadows are out by 14.5 degrees when compared to the shadows on BATs.

The two images below show the difference - the zip file is a little .dat that fixes the LightingExemplar to carry the correct angles.

mattb325

Great progress again! Excellent work on the engineering department textures, gui and new shadow mod!


Barroco Hispano

When the sun direction is modified, this happens:



Some time ago I spent hours trying to solve it but it was impossible...

mattb325

I guess it is worth mentioning that a batted object is rarely seen in isolation; it will always be surrounded by other objects, and, as shadows are not cast upon bats by other bats, small glitches tend to go unnoticed inside the clutter of a typical scene.

Also it depends on how the bat was made and by which designer. From my observations, it appears that each designer had their own separate version of Max and in addition to each being different from the other, there didn't appear to be any versions that were an exact match for the gmax that the community was given. The rigs weren't consistent, either (you can see this by looking at the California Plaza shadows. All the landmarks were made by one or two Maxis designers; industrial by another; residential another, etc, etc).

Add to that each designer's own inevitable artistic signature and I'm amazed that the Maxis buildings are congruous. I suspect the materials palette covered any glaring inconsistencies: by using Maxis materials I can still get 3dsMax 2017 to appear 'old school' as it were....

Your example probably has an unintentional underground lod.

You can see the same effect in one of the examples (3rd from the bottom) in this:



eggman121

I must say. This is awesome work that is going on here.

This is truly an inspirational and collaborative effort by all that are involved. Kudos to you all  :thumbsup:  &apls

To add some flavor, I use some scripts from the Quake MD3 scene to export true 3D models from Gmax to Blender. I can't help but wonder if such scripts could help users with a large repository of models get them from gmax into Blender, to be touched up with the rendering power of Blender.

This would also help with workflow for users and content creators that have experience with gmax but not so much with Blender. So you could do the modeling in Gmax and the texturing in Blender? &idea

Anyway I am keenly following these developments. You all need a pat on the back for the effort shown.

-eggman121

Odainsaker

@rivit:  Even though it arose from a tangential issue spurred by an offhand aside, your shadows fix resolved a glaring bit of incompetence that has irked me for years...



Finally, I can sleep in peace!  Thank you!  Thank you!  Thank you!

Ancient Greek geographers used the lengths and angles of sundial shadows cast in Alexandria and Syene in Egypt to determine the astounding circumference of a spherical Earth.  SimCity 4 astronomers would have looked at the wacky shadow of my column and instead been astounded by the inanity of their world.  No longer will they be deceived by the digital shortcuts underlying the mysterious workings of their universe.

Hehe, actually, I recall that SimFox used to lambast Maxis for this blatant mismatch in its BAT lighting and game shadowing setup.  I also seem to hazily recall that at some point he had mentioned that his sun position in BAT4Max had deviated from the primary "sun" light Maxis setup in Gmax.  With Gmax and even Maxis's original buildings already not matching the game, it may have been decided that longer BAT shadows and different angles of incidence of sunlight against typical walls were more graphically effective than purity to Gmax's and Maxis's fuzzy setup.

vortext

So a very puzzling bug threw me off the past few days. It boiled down to the render not being aligned correctly whenever the camera was (re)initialized in the scene, even though all prerequisite checks for scene context were being met when rendering. Turns out whenever new objects are linked to the scene an additional update method needs to be called as well . . great. .  &sly .. no idea what it does exactly, but it did ultimately resolve the issue, though it took a while to get there.

Anyway, preview and rendering are working now at least. Though I suspect the orthographic scale is still not correct for all zooms. More specifically, I suspect a single orthographic scale needs to be used across all zooms, rather than recalculating it for each zoom based on camera height as it is doing currently.



Bit annoying that the render preview window seems to position itself at the mouse cursor. Would be nice if it could have a fixed position on screen. I also wanted to show the render previews when rendering all zooms & rotations, however that's not as easy as it appeared (exact same issue & possible solution here). So yeah, putting these issues on the backburner for now.

Instead I want to implement the LOD export this weekend, however, I'm having trouble going through the steps manually to start with. Either I'm not using the correct export settings in Blender, or perhaps importing it wrong in gmax. Either way the dimensions always come out wrong; specifically, it always becomes a regular box shape in gmax (i.e. width, depth & height are the same) even if the LOD in Blender is an irregular shape.  %wrd

time flies like a bird
fruit flies like a banana

rivit

Despite apparent silence things have been progressing in the background. Today I would like to introduce a new program called OBJxS3D which is another brick in the wall needed to get from Blender to SimCity4.

This program is designed to accept an OBJ file and produce an S3D file which can then be inserted into a .DAT with Reader. It is designed to swing both ways - it's also possible to take an exported S3D file and turn it into an OBJ file. If appropriately numbered, the S3D file and its Textures will be correctly imported and linked in Reader. This is a fundamental step in the BAT4Blender export process, where we export texture mapped LODs. Done this way it will make the gmax step obsolete.

This program should eventually also be useful for making network models for the NAM (shoutout to the Eggman), or automata in tools like Blender, MilkShape or even 3dsMax (anything that can edit and export OBJs actually) as no extra work will be required if the OBJs and their materials are correctly named.

The materials lib it uses/produces will propagate the texture references given, and these are/should be named as they are in GoFSH (T-G-I-C0.bmp) which can be used to produce the textures needed for the materials. 

Currently the program is in TestMode, i.e. it will produce two files for every OBJ file it can read, namely _SaveOBJtoOBJ.obj and _SaveOBJtoS3D.s3d. For every S3D file it reads it will produce _SaveS3DtoS3D.s3d, _SaveS3DtoOBJ.obj and _SaveS3DtoOBJ.mtl files.

The point of making these particular test files is that the OBJ files should give the same result as the originals when loaded into a Modelling program, and the S3Ds when imported into Reader. Don't try big models - <100 vertices should be more than enough to test with. Round trips OBJ->S3D->OBJ and S3D->OBJ->S3D seem to be working correctly so it's time to test more widely. Reader can be used to save decoded S3D and FSH files.

The program can be found here:  https://1drv.ms/u/s!AphvaLJG-tShg4xvy66rkCFJZGnJuA
OBJxS3D.exe is a .NET 2.0 exe so should work on any Windows from XP to Win 10. Just put it in any temp folder.

I would ask that a few people please test this program to see that the following things do work properly as I suspect results may vary depending on the Modelling programs used or the source of the OBJ in the first place.

OBJ files are supposed to use Right Handed XY Z(Up) Coordinates, SimCity uses Left Handed XZ Y(Up) Coordinates - so expect models to be oriented incorrectly if the OBJ file is true to standard.  I don't know Blender well enough to do the Texture tests.
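For the axis mismatch, a sketch of the expected mapping (assuming a straight Y/Z swap is the right reflection; as noted above, actual files may vary):

```python
def obj_to_sc4(vertex):
    """Map a right-handed Z-up OBJ vertex to SimCity's left-handed Y-up
    frame by swapping Y and Z. A single axis swap is a reflection, so it
    also flips handedness - an assumption to verify, not a confirmed rule."""
    x, y, z = vertex
    return (x, z, y)

def sc4_to_obj(vertex):
    # the swap is its own inverse
    return obj_to_sc4(vertex)
```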

Look out for:

  • It can read the OBJ file - i.e. a valid file doesn't crash. It should ignore things it can't recognise and report Errors and Warnings.
  • X, Y and Z point the right way - i.e. the model is oriented correctly in Reader. Use models that are oddly shaped so you can tell if they're the right way up, i.e. don't use a cube or a sphere.
  • The Model has texture coordinates - if missing in the OBJ (no vt lines) they will be set to 0 and the textures will not map at all.
  • The textures are mapped correctly - they may end up flipped H or V or both.
  • That no vertices are dropped, i.e. sharp edges stay that way and textures are not interpolated over an edge that should be between two different parts of a texture.
  • That the faces are oriented correctly - facing outwards, i.e. counterclockwise vertices. You can't see this till you get into SimCity - if they're wrong you won't see the model at all. Reader doesn't cull faces facing inwards so you can't see this error there.
  • That correctly named materials (I_name) and corresponding material maps (map_Kd = T-G-I-C0.bmp) turn up correctly in Reader, and display properly when they have been loaded into Reader as FSH. Conversely they should also work in the Modelling Program after exporting.

Please feed back your questions/findings to me with the name of the Modelling Program used to produce the OBJ and which of the above 7 points failed and how. If there's a predictable pattern I'll change the program accordingly. Obviously Blender is the main target for compatibility. Milkshape is a useful program too - a lot like gmax.

  I have so far only tested with Milkshape - it uses the same axes as SimCity but flips the textures horizontally. It takes the OBJ literally and doesn't appear to worry about axis orientation.
   

vortext

Quote from: rivit on March 14, 2019, 06:18:14 PM
Despite apparent silence things have been progressing in the background.

Yes this is true for me as well; suppose it's time for the weekly show and tell.  :D

At any rate, things have been progressing nicely with the addon. Notable additions include;

- fully functional gui
- lod export as .3ds (and export as .obj in the works)
- png written with tgi as filenames
- 'tiled' rendering

And here's how it looks atm.


Options for HD, nighttime, etc will be added as things progress.

With regards to the tiled rendering, instead of slicing the complete rendered image after the fact it turns out Blender actually has the option to render part of the camera view, and save the partial render as an image. Rather handy indeed.


Ignore the filenames here, it was for testing. For context you're looking at a 32x32x32 diagonal cube.

However there's one thing still to be tackled here, which is to figure out if a partial view is empty and disregard it before writing (as writing empty images will exhaust the naming scheme rather quickly, as explained in Robin's post here). The most straightforward way I can think of is to check the alpha values of the pixels array, though I suspect some matrix magic might do the trick a lot faster. . but yeah, projection matrices really are hard to wrap one's head around.
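The alpha check described above could look roughly like this (assuming pixels come in as Blender's flat RGBA float list; the sampling stride is the speed/accuracy trade-off mentioned):

```python
def tile_is_empty(pixels, step=1):
    """Return True when every sampled alpha value is 0.0.
    pixels is a flat [r, g, b, a, r, g, b, a, ...] float list, as a
    Blender image exposes it; step > 1 samples every Nth pixel, which
    is faster but risks a false 'empty' verdict."""
    return all(a == 0.0 for a in pixels[3::4 * step])
```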

At any rate, there're few other things on the to-do list still but very close to a first version of a functional addon.  :)

rivit

You've made a lot of progress  &apls.

Well that partial render looks rather handy. I wouldn't be surprised to find Blender knows when it's empty by virtue of some variable or method it has, because it saves a lot of render time to know when the model is not visible.

  If we export LODs to 3ds for gmax it's just a model, as gmax does the projection work. But if we go ourselves then we still need to attach texture coordinates to the LOD - that is the time to check - if the LOD model doesn't encroach into the rendered view the calculated texture coordinates will be >1 or <0 because the model won't intersect the render. The LOD needs to be in the view projection to produce the texture coordinates, but it's still saved in model coordinates, rotated accordingly, for the OBJxS3D conversion to result in the correct perspective for SC4.

Do also check that you have the view Rotation enum correctly numbered 1=S, 2=E, 3=N, 4=W which is how it shows in gmax and Bat4Max.  Also rendered textures are always produced and numbered from top-left, across then down. The correct formula for the IID is given below from the Bat Scripts (up to 64 FSH per view rotation)


    ''--------------------------------------------------------------------------------------------------
    '' Function:   TextureOutputIDStr
    '' Param:      texIndex                     
    '' Desc:         
    ''--------------------------------------------------------------------------------------------------
    Private Function TextureOutputIDStr(Optional ByVal texIndex As Integer = 0) As String

      Dim highSwizzle As Integer = 0

      '' awful kludge to be able to handle more than 16 textures per view
      '' sharing the second digit between rot & the high bit of the texture index
      Dim digit2 As Integer = texIndex \ 16
      texIndex = texIndex Mod 16
      If (digit2 > 3) Then
        MsgBox("This building is huge! It's just not going to work!")
      Else
        highSwizzle = digit2 * 4
      End If

      Dim guidStr As String = ModelID() & (zoom - 1).ToString("X1") & (highSwizzle + (rot - 1)).ToString("X1") & texIndex.ToString("X1")

      Return guidStr

    End Function
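For reference, a rough Python port of the VB routine above (model_id, zoom and rot as in the script; a sketch, not tested against real BAT output):

```python
def texture_output_id(model_id, zoom, rot, tex_index=0):
    """Build the texture IID string: model id, zoom digit, a second
    digit sharing rotation with the high bits of the texture index,
    then the low texture digit - the same kludge as the VB code above.
    zoom and rot are 1-based, tex_index 0-based."""
    digit2, low = divmod(tex_index, 16)
    if digit2 > 3:
        raise ValueError("more than 64 textures per view rotation")
    high_swizzle = digit2 * 4
    return "{}{:X}{:X}{:X}".format(
        model_id, zoom - 1, high_swizzle + (rot - 1), low)
```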

vortext

Thanks Ron!

Actually had the rotation enums backwards, fixed now, as well as going in the correct order for tiled rendering.  :thumbsup:

With regards to the tgi, I don't quite understand yet what they're doing; however, as tgi gen does work for small scale stuff, I returned my attention to the orthographic scale and yeah, speaking of awful kludges. .  ::)

Using the same orthographic scale for all zooms wasn't correct. So I made it take camera height into account again, however this didn't appear correct either. After some manual fiddling and comparisons in Blender I noticed a pattern: the os returned from the gmax formula decreased per zoom while it should be increasing. Long story short, I ended up with the following, wherein the default is the os for zoom 5 as calculated by gmax.

final_os = default_os + (default_os-os_gmax)

This gave better results, here's how the blender rendered zooms hold up to the uv mapping.










Zoom 1 & 2 are still rather poor fits compared to the gmax renders, however the routine is using 1 pixel x & y margins for all zooms so there's some wiggle room to improve things. 
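The correction above amounts to mirroring the gmax value around the zoom-5 default:

```python
def corrected_ortho_scale(default_os, os_gmax):
    """The empirical fix quoted above: final = default + (default - gmax),
    i.e. 2*default - gmax, which turns the decreasing gmax series into an
    increasing one centred on the zoom-5 value. Function name is mine."""
    return default_os + (default_os - os_gmax)
```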

mattb325

That's incredible progress, guys  :thumbsup:

The fit for the z5 is perfect...for z1 & 2, from my observations, the buildings are actually standard orthographic, whereas z3-5 are the SC4 camera angle (if that helps any)

rivit

Well, all things being equal that's not bad. Z5 is spot on, the Z4 is only an offset of 1 up/left away. The others look to be offset and distorted, and I wouldn't trust the Z0 coordinates as they look wrong in gmax.

To be frank, we won't get this right till we can produce the texture coordinates ourselves to see what the numeric differences between them and gmax are. Once we have that at least we can mathsage it back into shape.

Thinking further about the partial renders - we still will need to chop up the LODs for all the parts. There is a slice by plane function in Blender that we may be able to put to use for this - I suspect it will need some playing with before we know how it works for us. We need to slice by a plane that goes from the view point, through the sides of the partial render to reduce the projected LOD to what SC4 needs - then cull any backfacing faces.

I've done some further testing with OBJxS3D and it seems Blender picks up the OBJs of exported S3Ds quite well. Like Milkshape the mesh comes in correctly (i.e. faces the right way optically) and only the texture v coords are flipped vertically. So that was surprisingly good. Haven't gone back the other way yet but I don't expect a lot of problems. Will fix the program to take CLI arguments so we can invoke it directly from either a .bat file or from the plugin itself by using the OS Run or Exec.

Below is the result of importing one of my SC4 automata into Blender and then rendering it. It's pretty cubist but that's what you get with a very low poly model.


vortext

welp seems like last time I forgot to mention I was away for the week . . :-[  . . so not much progress since last update.

At any rate, back at how to deal with empty tiles. Unfortunately it seems the python api has no direct way of telling if there's nothing in the camera view. Worse, there's no direct way to access the pixel array of the rendered image, unless using nodes as described here.

So I implemented that, however, it is really slooow .. In fact, for some time I thought there was a bug in the checking procedure because Blender seemed to freeze up, but nope.. it is just a stupidly slow process.. Like, it took ~45 minutes to 'render' a 32x32x32 cube. Obviously things can be sped up by not checking every single pixel in the image and checking every Nth pixel instead, however that does give rise to the chance an image is discarded as empty while it is not.

So yeah, this is not ideal on both counts and I may have to come up with alternatives. I was thinking perhaps raycasting could be of help here, as well as come in handy for LOD slicing later on? Any other ideas how to go about tackling this?   &Thk/(

Jasoncw

The sliced images would have a fixed number of dimensions, right?

If they were empty, wouldn't there also be a fixed number of file sizes? For example wouldn't an empty 256x256 image always have the same file size, and a 128x256, and so on?

The only potential variation I can think of in a genuinely empty area would be that the background color would be different for different renders, but it would always only be one color.

rivit

@jasoncw
That's a fair observation - PNGs should compress to a constant (small) size if empty (0) or only one color (r,g,b) throughout for a given image dimension. Probably need to save it first to get it though. Other formats like BMP are a constant size regardless of their contents so filesize isn't helpful in working out if they're empty.
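Jasoncw's observation can even be sketched without writing the PNG to disk, by deflating the raw pixel bytes and checking the size (the threshold is an assumed cut-off to tune, not a measured constant):

```python
import zlib

def looks_flat(raw_bytes, threshold=512):
    """A buffer that is empty or one colour throughout deflates to a
    tiny, near-constant size, so a small compressed payload flags a
    (probably) empty tile. threshold is a guess for illustration."""
    return len(zlib.compress(raw_bytes)) < threshold
```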

@vortext  I think we needed a break anyway - I have finished OBJxS3D and its usable now. But I haven't done anything else either. All of my experiments with Blender itself left me frustrated so I haven't been near it for a while.

Thinking about the empty image thing there seem to me to be a few possibilities

1) don't worry about empty textures for now - it won't go wrong in SC4 since nothing will refer to them.

2) don't do partial renders but rely on the chopping up part to segment everything. That can work out if it's empty by knowing no vertex coordinate falls in that 256x256 segment. This involves raycasting from the viewpoint through the vertex coordinates onto the texture to get u,v and slicing the meshes accordingly. It does depend on seeing if we can chop things up efficiently in Blender.  1 view/zoom -> 1 LOD with 1 texture -> n LOD parts with 1 part texture each.
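For option (2), the segment a projected vertex lands in can be computed directly from its texture coordinates (tile order assumed top-left, across then down, as Ron described earlier; a sketch only):

```python
def tile_index(u, v, render_w, render_h, tile=256):
    """Map a texture coordinate (u, v in 0..1, v measured downwards)
    to the index of the 256x256 tile it falls in, counting across then
    down. A tile no vertex maps into can be skipped entirely."""
    cols = (render_w + tile - 1) // tile            # tiles per row
    col = min(int(u * render_w), render_w - 1) // tile
    row = min(int(v * render_h), render_h - 1) // tile
    return row * cols + col
```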

3) don't bother to try checking for empty inside Blender, as checking for empty textures is trivial and very fast in GoFSH - I already do this for a number of things. If textures need chopping/renumbering on the run then so be it - these are also in GoFSH code already. I haven't yet built in the special processing for Bat4Blender but will be reusing everything I already have and adding things won't be that hard.

4) we might need to do the whole segmenting/chopping thing outside Blender - given OBJs of the LODs 1-5 and PNG renders of View 1-4, Zoom 1-5. Then all of that code goes into a .NET exe. Not ideal but it may be way faster.

Favouring (4): if small objects that render to <=256x256 pixels can be produced directly from BAT4Blender to feed through OBJxS3D and GoFSH (PNG->BMP->FSH), then that is a major achievement already. Then we can expand to larger items.

We always knew it was going to get quite hard at some point - and this LOD slicing has always seemed the worst part (not forgetting you've already solved a number of other difficult things) - the longer we can avoid having to code it the more chance we have of understanding it.

fantozzi

Just a "fantozzi thought":

Blender has a really mighty community with many developers and modelling enthusiasts. If you ever get stuck it might be helpful to present your work so far in a topic somewhere over there (https://www.blender.org/community/). Could be we even get some "blender guys" involved with SC4. Making up a new connection with this immense community might be fruitful in the future.


vortext

yeah, option 4 seems to make the most sense at the moment, so I've disabled tiled rendering for now. Also found a neat example of a dialogue popup and implemented it to warn the user the model is too big to render (figured it will come in handy anyway  :) ).

And with that a first release of the addon can be downloaded here. I'm curious if it will install and what issues may arise .. *fingers crossed*

Couple of notes;


  • LOD fitting is a manual process, i.e. you need to hit the 'fit' button when changes to model dimension are made.
  • LOD fitting currently only takes mesh objects into consideration, so splines and the like will have to be converted (changing this is on the todo list).
  • LOD export is to the same location as the .blend file, and cannot be renamed (yet), so be careful as it will silently overwrite previously exported files
  • ditto for the rendered png files, they will be exported to the same folder as the .blend file and silently overwrite previous versions
  • rendering is limited to small models, i.e. texture dimensions <= 256 pixels

As for install procedure;


  • Open Blender and go to File -> User Preferences (screen). This will open a popup window
  • Go to the Add-ons tab, and select 'Install Add-on from File..' at the bottom (screen).
  • Navigate to the downloaded .zip file, select it and hit 'Install Add-on from File' button (screen).
  • Next the add-on needs to be enabled to make use of it. Easiest is to type 'bat' in the search bar and check the box (screen).
  • If all went well the add-on should be available at the end of the scene context menu (screen)


To disable the Add-on simply uncheck the box. To uninstall go to the Add-ons tab in the user preferences, search for the addon and expand the info box with the little triangle, this will reveal the option to remove it (screen)

As for how to proceed, I'll have to play around with the .net exe and see if it can perhaps be called from Blender, in addition to some cleanup work code wise. .  ::) Also have a few options to explore regarding the lod slicing ordeal.

eggman121

Great Work @Vortex

I was able to install the current build; however, I jumped the gun early and tried to install from a previous post.

If users have issues they will have to remove the previous version. Just a heads up. Blender was getting cranky at me for having two installs.

You have done well so far. I will see If I can get some BATs out as a test.

Thanks again.

-eggman121