


CIG 2017: New York, NY, USA
IEEE Conference on Computational Intelligence and Games, CIG 2017, New York, NY, USA, August 22-25, 2017. IEEE 2017, ISBN 978-1-5386-3233-8
- Samuel Alvernaz, Julian Togelius: Autoencoder-augmented neuroevolution for visual Doom playing. 1-8
- Daan Apeldoorn, Vanessa Volz: Measuring strategic depth in games using hierarchical knowledge bases. 9-16
- Daniel A. Ashlock, Diego Pérez-Liébana, Amanda Saunders: General video game playing escapes the no free lunch theorem. 17-24
- Alexander Baldwin, Steve Dahlskog, José M. Font, Johan Holmberg: Mixed-initiative procedural generation of dungeons using game design patterns. 25-32
- Paul Bertens, Anna Guitart, Africa Perianez: Games and big data: A scalable multi-dimensional churn prediction model. 33-36
- Joseph Alexander Brown, Daniel A. Ashlock: Using multiple worlds for multiple agent roles in games. 37-44
- Andrew Burns, James R. Tulip: Detecting flow in games using facial expressions. 45-52
- Simon Demediuk, Marco Tamassia, William L. Raffe, Fabio Zambetta, Xiaodong Li, Florian 'Floyd' Mueller: Monte Carlo tree search based algorithms for dynamic difficulty adjustment. 53-59
- Alexander Dockhorn, Rudolf Kruse: Combining cooperative and adversarial coevolution in the context of Pac-Man. 60-67
- Markus Eger, Chris Martens, Marcela Alfaro Cordoba: An intentional AI for Hanabi. 68-75
- José María Font, Daniel Manrique, Sergio Larrodera, Pablo Ramos-Criado: Towards a hybrid neural and evolutionary heuristic approach for playing tile-matching puzzle games. 76-79
- Yannick Francillette, Abdelkader Gouaïch, Lylia Abrouk: Adaptive gameplay for mobile gaming. 80-87
- Raluca D. Gaina, Simon M. Lucas, Diego Perez Liebana: Rolling horizon evolution enhancements in general video game playing. 88-95
- Georgios Goudelis, Georgios Tsatiris, Kostas Karpouzis, Stefanos D. Kollias: 3D cylindrical trace transform based feature extraction for effective human action classification. 96-103
- Garrison W. Greenwood: A fuzzy system approach for choosing public goods game strategies. 104-109
- Gina Grossi, Brian Ross: Evolved communication strategies and emergent behaviour of multi-agents in pursuit domains. 110-117
- Cristina Guerrero-Romero, Annie Louis, Diego Perez Liebana: Beyond playing to win: Diversifying heuristics for GVGAI. 118-125
- Manuel Guimarães, Pedro Santos, Arnav Jhala: CiF-CK: An architecture for social NPCs in commercial games. 126-133
- Lewis Horsley, Diego Perez Liebana: Building an automatic sprite generator with deep convolutional generative adversarial networks. 134-141
- Aaron Isaksen, Drew Wallace, Adam Finkelstein, Andy Nealen: Simulating strategy and dexterity for puzzle games. 142-149
- JiHoon Jeon, DuMim Yoon, Seong-Il Yang, Kyung-Joong Kim: Extracting gamers' cognitive psychological features and improving performance of churn prediction from mobile games. 150-153
- Yuxuan Jiang, Tomohiro Harada, Ruck Thawonmas: Procedural generation of Angry Birds fun levels using pattern-struct and preset-model. 154-161
- Niels Justesen, Sebastian Risi: Learning macromanagement in StarCraft from replays using deep learning. 162-169
- Ahmed Khalifa, Michael Cerny Green, Diego Perez Liebana, Julian Togelius: General video game rule generation. 170-177
- Man-Je Kim, Kyung-Joong Kim: Opponent modeling based on action table for MCTS-based fighting game AI. 178-180
- Bartosz Kostka, Jarosław Kwiecień, Jakub Kowalski, Pawel Rychlikowski: Text-based adventures of the Golovin AI agent. 181-188
- Sang-Kwang Lee, Seong-Il Yang: Optimizing game live service for mobile free-to-play games. 189-190
- Scott Lee, Julian Togelius: Showdown AI competition. 191-198
- Daniele Loiacono, Luca Arnaboldi: Fight or flight: Evolving maps for Cube 2 to foster a fleeing behavior. 199-206
- Luong Huu Phuc, Kanazawa Naoto, Kokolo Ikeda: Learning human-like behaviors using neuroevolution with statistical penalties. 207-214
- Luís Fernando Maia Silva, Windson Viana, Fernando A. M. Trinta: Using Monte Carlo tree search and Google Maps to improve game balancing in location-based games. 215-222
- Byeong-Jun Min, Kyung-Joong Kim: Learning to play visual Doom using model-free episodic control. 223-225
- Chanh Nguyen, Noah Reifsnyder, Sriram Gopalakrishnan, Héctor Muñoz-Avila: Automated learning of hierarchical task networks for controlling Minecraft agents. 226-231
- Hiroya Oonishi, Hitoshi Iima: Improving generalization ability in a puzzle game using reinforcement learning. 232-239
- Joseph C. Osborn, Adam Summerville, Michael Mateas: Automated game design learning. 240-247
- Diego Perez Liebana, Matthew Stephenson, Raluca D. Gaina, Jochen Renz, Simon M. Lucas: Introducing real world physics and macro-actions to general video game AI. 248-255
- Andreas Precht Poulsen, Mark Thorhauge, Mikkel Hvilshøj Funch, Sebastian Risi: DLNE: A hybridization of deep learning and neuroevolution for visual control. 256-263
- Martin L. M. Rooijackers, Mark H. M. Winands: Resource-gathering algorithms in the game of StarCraft. 264-271
- Andre Santos, Pedro Alexandre Santos, Francisco S. Melo: Monte Carlo tree search experiments in Hearthstone. 272-279
- Sam Snodgrass, Santiago Ontañón: Procedural level generation using multi-layer level representations with MdMCs. 280-287
- Matthew Stephenson, Jochen Renz: Generating varied, stable and solvable levels for Angry Birds style physics games. 288-295
- Alberto Uriarte, Santiago Ontañón: Single believe state generation for partially observable real-time strategy games. 296-303
- Olivier Van Acker, Oded Lachish, Graeme Burnett: Cellular automata simulation on FPGA for training neural networks with virtual world imagery. 304-305
- Seonghun Yoon, Kyung-Joong Kim: Deep Q networks for visual fighting game AI. 306-308
- Shuyi Zhang, Michael Buro: Improving Hearthstone AI by learning high-level rollout policies and bucketing chance node events. 309-316
- Ercument Ilhan, A. Sima Etaner-Uyar: Monte Carlo tree search with temporal-difference learning for general video game playing. 317-324
