Hm, you could certainly make some adjustments to this algorithm as-is (e.g. prioritize black over green/yellow/brown details, use something more sophisticated than pixel-by-pixel equality for image similarity, synthesize a new tile that tries to fill in for the tiles to eliminate...). I was trying to think how this might be improved into a more effective greedy search, but while it seems like it should be possible, I couldn’t come up with anything that wasn’t basically the same as this.
For a genetic algorithm, I would (at least the first time trying this) make each individual a tile set, and the fitness function a similarity measure (such as the histogram method described here) for the closest image you can construct out of that tile set. The mutation... well, that would be the tough part, since it depends on the representation. Assuming any pixel can be any color, you can always represent tile sets as a large vector of floating-point numbers and round them to get the color values.
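To make the representation concrete, here is a minimal sketch of what I mean: an individual is a flat vector of floats, decoded into tiles by quantization, with a histogram-intersection similarity standing in for the fitness. All the constants (tile size, tile count, 8-level grayscale) and the use of intersection specifically are my assumptions, not a fixed recipe, and comparing against the target directly is a stand-in for "closest constructible image".

```python
import random

# Assumed sizes; real values depend on your tile format.
TILE_W = TILE_H = 4          # assumed tile dimensions
TILES_PER_SET = 8            # assumed tile-set size
GENES = TILES_PER_SET * TILE_W * TILE_H
LEVELS = 8                   # assumed grayscale quantization

def random_individual(rng):
    """One individual = one tile set, encoded as floats in [0, 1)."""
    return [rng.random() for _ in range(GENES)]

def decode(individual):
    """Quantize each gene to one of LEVELS color values, grouped into tiles."""
    pixels = [min(int(g * LEVELS), LEVELS - 1) for g in individual]
    return [pixels[i:i + TILE_W * TILE_H]
            for i in range(0, GENES, TILE_W * TILE_H)]

def histogram(pixels):
    h = [0] * LEVELS
    for p in pixels:
        h[p] += 1
    return h

def fitness(individual, target_pixels):
    """Histogram-intersection similarity between the decoded tile set's
    pixels and the target image's pixels (higher is better, max 1.0)."""
    tile_pixels = [p for tile in decode(individual) for p in tile]
    th, gh = histogram(tile_pixels), histogram(target_pixels)
    ts, gs = sum(th), sum(gh)
    # normalize counts before intersecting, since pixel counts may differ
    return sum(min(a / ts, b / gs) for a, b in zip(th, gh))
```

Crossover and mutation then operate on the raw float vector, which is the whole appeal of this encoding.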
If it sounds like I have no idea how well this would work, how fast it would run, or even whether it’s a reasonable choice given what else is out there, it’s because I don’t. To my knowledge, this has never been done before for exactly the task of constructing an image out of tiles, but GAs have done a pretty good job on comparable graphics tasks before (e.g. finding wavelet- or fractal-based approximations of images). Maybe I should ask my AI professor what he’d use; GAs and related topics are kind of his strong point....
This is not super hard, but it’s not trivial, either. The good news is that expected runtime is fairly predictable. Most of the time by far will be spent in fitness evaluation, and from there you can get a pretty good estimate of how long a run will take (population size * generations * average fitness evaluation time). With a GA it is crucial to tune parameters such as crossover/mutation probability to get results as close to optimal as possible; if you’re feeling frisky, you can always use a GA to find decent parameters for a different GA. Also, the “one-fifth rule” is a decent rule of thumb for autotuning mutation variance.
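Both of those rules of thumb fit in a few lines. This is just a sketch: the runtime formula is the one from above, and the adjustment factor in the one-fifth rule is an assumption (anything modestly above 1 works).

```python
def estimated_runtime(pop_size, generations, avg_fitness_eval_seconds):
    """Back-of-the-envelope run time: one fitness evaluation per
    individual per generation dominates everything else."""
    return pop_size * generations * avg_fitness_eval_seconds

def one_fifth_update(sigma, success_rate, factor=1.5):
    """One-fifth success rule: if more than 1/5 of recent mutations
    improved fitness, widen the search; if fewer, narrow it.
    The factor of 1.5 is an assumed, tunable constant."""
    if success_rate > 0.2:
        return sigma * factor
    if success_rate < 0.2:
        return sigma / factor
    return sigma
```

For example, a population of 100 over 200 generations at 50 ms per evaluation comes out to about 1000 seconds, which tells you up front whether a parameter sweep is affordable.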
The main reason I’m not setting this all up for you myself is that I need a bit more experience handling graphics programmatically before attempting this, and I’m not going to get that experience during the last week of classes + finals. I’d be glad to help you out if you’re interested in doing this yourself, though.
I asked said professor, and his hunches were about the same. One thing I didn’t mention is that he thought decision trees might be helpful somehow; I couldn’t think of a way to make that work, though.
One possible adjustment I have thought up (that is a lot simpler!) is that you may wish to experiment with the criterion for choosing which tiles to eliminate. Rather than finding the two most similar tiles out of the tile set, you might want to try some other evaluation of how “useful” a given tile is (maybe something involving the tile’s entropy and frequency of occurrence?). Bonus: finding the similar tile to merge it into becomes O(n).
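One way that "usefulness" idea could be scored, as a sketch: take the tile's pixel entropy weighted by its occurrence count, so flat, rarely-used tiles become elimination candidates first. Combining the two by multiplication is purely my assumption.

```python
import math
from collections import Counter

def tile_entropy(pixels):
    """Shannon entropy (in bits) of the tile's pixel-value distribution."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def usefulness(tile_pixels, occurrences):
    # Assumed scoring: detailed tiles that appear often are worth keeping.
    return tile_entropy(tile_pixels) * occurrences

def least_useful(tiles, counts):
    """Pick the elimination candidate with a single O(n) pass, instead of
    the O(n^2) all-pairs similarity search."""
    return min(range(len(tiles)), key=lambda i: usefulness(tiles[i], counts[i]))
```

A solid-color tile has entropy 0, so it always loses to a tile with any internal detail at equal frequency, which matches the intuition above.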
Lurching back into the realms of the not-so-sane methods, I don’t know exactly what you do to “merge” tiles, but it might be feasible to graphically merge them on some level (plain old blending? moving pixels around?) and try reevaluating their placement on the final image.
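The simplest of those blend options is just a weighted per-pixel average; this sketch assumes tiles are flat lists of integer pixel values, with per-pixel alpha or pixel shuffling as drop-in alternatives.

```python
def blend_tiles(a, b, alpha=0.5):
    """Graphically merge two equal-sized tiles by weighted averaging.
    alpha = 1.0 keeps tile a; alpha = 0.0 keeps tile b."""
    assert len(a) == len(b)
    return [round(alpha * pa + (1 - alpha) * pb) for pa, pb in zip(a, b)]
```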
I have also entertained the notion that it might be fruitful to attempt some sort of greedy search on the pixels in a tile set (akin to the min-conflicts algorithm), re-evaluated at each step, instead of on the tiles in the tile set (I’m not sure whether the tiles would stay in fixed positions here, though that would make sense for what you already have). I suspect it would get caught in local optima quite a bit, though.
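A toy version of that min-conflicts-style idea might look like this: repeatedly pick a pixel of a tile and greedily reassign it to whatever value minimizes its total error across all the fixed positions where that tile is placed (the conflict comes from one tile having to serve several regions at once). The sizes, squared-error metric, and step count are all assumptions.

```python
import random

def min_conflicts_pixels(tile, targets, levels=8, steps=50, seed=0):
    """Greedy per-pixel search on one shared tile.

    tile:    flat list of pixel values for a single tile
    targets: list of same-length image regions the tile must cover
    """
    rng = random.Random(seed)
    tile = list(tile)

    def err(i, v):
        # total squared error at pixel i across every placement
        return sum((v - region[i]) ** 2 for region in targets)

    for _ in range(steps):
        i = rng.randrange(len(tile))
        # reassign pixel i to its locally optimal value
        tile[i] = min(range(levels), key=lambda v: err(i, v))
    return tile
```

Each step can only decrease (or keep) the total error, which is exactly why it converges fast and exactly why it can stall in a local optimum once no single-pixel change helps.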