Bikramjit Banerjee
Journal Articles
- 2024
  - [j22] Keyang He, Prashant Doshi, Bikramjit Banerjee: Modeling and reinforcement learning in partially observable many-agent systems. Auton. Agents Multi Agent Syst. 38(1): 12 (2024)
  - [j21] Shiron Manandhar, Bikramjit Banerjee: Reinforcement actor-critic learning as a rehearsal in MicroRTS. Knowl. Eng. Rev. 39 (2024)
- 2022
  - [j20] Trung Nguyen, Bikramjit Banerjee: Reinforcement learning as a rehearsal for swarm foraging. Swarm Intell. 16(1): 29-58 (2022)
- 2021
  - [j19] Saurabh Arora, Prashant Doshi, Bikramjit Banerjee: I2RL: online inverse reinforcement learning under occlusion. Auton. Agents Multi Agent Syst. 35(1): 4 (2021)
  - [j18] Roi Ceren, Keyang He, Prashant Doshi, Bikramjit Banerjee: PALO bounds for reinforcement learning in partially observable stochastic games. Neurocomputing 420: 36-56 (2021)
  - [j17] Bikramjit Banerjee, Sneha Racharla: Human-agent transfer from observations. Knowl. Eng. Rev. 36: e2 (2021)
- 2019
  - [j16] Bikramjit Banerjee, Syamala Vittanala, Matthew Edmund Taylor: Team learning from human demonstration with coordination confidence. Knowl. Eng. Rev. 34: e12 (2019)
- 2017
  - [j15] Daniel S. Brown, Jeffrey Hudack, Nathaniel Gemelli, Bikramjit Banerjee: Exact and Heuristic Algorithms for Risk-Aware Stochastic Physical Search. Comput. Intell. 33(3): 524-553 (2017)
  - [j14] Tsz-Chiu Au, Bikramjit Banerjee, Prithviraj Dasgupta, Peter Stone: Multirobot Systems. IEEE Intell. Syst. 32(6): 3-5 (2017)
  - [j13] Bikramjit Banerjee, Caleb E. Davis: Multiagent Path Finding With Persistence Conflicts. IEEE Trans. Comput. Intell. AI Games 9(4): 402-409 (2017)
- 2016
  - [j12] Landon Kraemer, Bikramjit Banerjee: Multi-agent reinforcement learning as a rehearsal for decentralized planning. Neurocomputing 190: 82-94 (2016)
- 2015
  - [j11] Bikramjit Banerjee, Jeremy Lyle, Landon Kraemer: The complexity of multi-agent plan recognition. Auton. Agents Multi Agent Syst. 29(1): 40-72 (2015)
  - [j10] Bikramjit Banerjee, Landon Kraemer: Stackelberg Surveillance. Informatica (Slovenia) 39(4) (2015)
- 2014
  - [j9] Landon Kraemer, Bikramjit Banerjee: Reinforcement Learning of Informed Initial Policies for Decentralized Planning. ACM Trans. Auton. Adapt. Syst. 9(4): 18:1-18:32 (2014)
- 2012
  - [j8] Bikramjit Banerjee, Jing Peng: Strategic best-response learning in multiagent systems. J. Exp. Theor. Artif. Intell. 24(2): 139-160 (2012)
- 2011
  - [j7] Bikramjit Banerjee, Landon Kraemer: Action Discovery for Single and Multi-Agent Reinforcement Learning. Adv. Complex Syst. 14(2): 279-305 (2011)
- 2010
  - [j6] Kyle Walsh, Bikramjit Banerjee: Fast A* with Iterative Resolution for Navigation. Int. J. Artif. Intell. Tools 19(1): 101-119 (2010)
- 2009
  - [j5] Bikramjit Banerjee, Ahmed Abukmail, Landon Kraemer: Layered Intelligence for Agent-based Crowd Simulation. Simul. 85(10): 621-633 (2009)
- 2007
  - [j4] Bikramjit Banerjee, Jing Peng: Generalized multiagent learning with performance bound. Auton. Agents Multi Agent Syst. 15(3): 281-312 (2007)
- 2006
  - [j3] Bikramjit Banerjee, Jing Peng: Reactivity and Safe Learning in Multi-Agent Systems. Adapt. Behav. 14(4): 339-356 (2006)
- 2004
  - [j2] Bikramjit Banerjee, Sandip Sen, Jing Peng: On-policy concurrent reinforcement learning. J. Exp. Theor. Artif. Intell. 16(4): 245-260 (2004)
- 2000
  - [j1] Bikramjit Banerjee, Anish Biswas, Manisha Mundhe, Sandip Debnath, Sandip Sen: Using Bayesian Networks to Model Agent Relationships. Appl. Artif. Intell. 14(9): 867-879 (2000)
Conference and Workshop Papers
- 2024
  - [c42] Keyang He, Prashant Doshi, Bikramjit Banerjee: Robust Individualistic Learning in Many-Agent Systems. PRIMA 2024: 290-305
- 2022
  - [c41] Saurabh Arora, Prashant Doshi, Bikramjit Banerjee: Online Inverse Reinforcement Learning with Learned Observation Model. CoRL 2022: 1468-1477
  - [c40] Keyang He, Prashant Doshi, Bikramjit Banerjee: Reinforcement learning in many-agent settings under partial observability. UAI 2022: 780-789
- 2021
  - [c39] Keyang He, Bikramjit Banerjee, Prashant Doshi: Cooperative-Competitive Reinforcement Learning with History-Dependent Rewards. AAMAS 2021: 602-610
  - [c38] Saurabh Arora, Prashant Doshi, Bikramjit Banerjee: Min-Max Entropy Inverse RL of Multiple Tasks. ICRA 2021: 12639-12645
- 2019
  - [c37] Vinamra Jain, Prashant Doshi, Bikramjit Banerjee: Model-Free IRL Using Maximum Likelihood Estimation. AAAI 2019: 3951-3958
  - [c36] Saurabh Arora, Prashant Doshi, Bikramjit Banerjee: Online Inverse Reinforcement Learning Under Occlusion. AAMAS 2019: 1170-1178
- 2018
  - [c35] Bikramjit Banerjee: Autonomous Acquisition of Behavior Trees for Robot Control. IROS 2018: 3460-3467
- 2016
  - [c34] Bikramjit Banerjee, Steven Loscalzo, Daniel Lucas Thompson: Detection of Plan Deviation in Multi-Agent Systems. AAAI 2016: 2445-2451
  - [c33] Roi Ceren, Prashant Doshi, Bikramjit Banerjee: Reinforcement Learning in Partially Observable Multiagent Settings: Monte Carlo Exploring Policies with PAC Bounds. AAMAS 2016: 530-538
- 2014
  - [c32] Todd W. Neller, Laura E. Brown, Roger L. West, James E. Heliotis, Sean Strout, Ivona Bezáková, Bikramjit Banerjee, Daniel Lucas Thompson: Model AI Assignments 2014. AAAI 2014: 3054-3056
- 2013
  - [c31] Bikramjit Banerjee: Pruning for Monte Carlo Distributed Reinforcement Learning in Decentralized POMDPs. AAAI 2013: 88-94
  - [c30] Landon Kraemer, Bikramjit Banerjee: Concurrent reinforcement learning as a rehearsal for decentralized planning under uncertainty. AAMAS 2013: 1291-1292
- 2012
  - [c29] Bikramjit Banerjee, Jeremy Lyle, Landon Kraemer, Rajesh Yellamraju: Sample Bounded Distributed Reinforcement Learning for Decentralized POMDPs. AAAI 2012: 1256-1262
  - [c28] Landon Kraemer, Bikramjit Banerjee: Informed Initial Policies for Learning in Dec-POMDPs. AAAI 2012: 2433-2434
  - [c27] Bikramjit Banerjee, Jeremy Lyle, Landon Kraemer: Efficient context free parsing of multi-agent activities for team and plan recognition. AAMAS 2012: 1441-1442
- 2011
  - [c26] Bikramjit Banerjee, Landon Kraemer: Branch and Price for Multi-Agent Plan Recognition. AAAI 2011: 601-607
  - [c25] Prithviraj Dasgupta, Ke Cheng, Bikramjit Banerjee: Adaptive Multi-robot Team Reconfiguration Using a Policy-Reuse Reinforcement Learning Approach. AAMAS Workshops 2011: 330-345
- 2010
  - [c24] Bikramjit Banerjee, Landon Kraemer: Search Performance of Multi-Agent Plan Recognition in a General Model. Plan, Activity, and Intent Recognition 2010
  - [c23] Bikramjit Banerjee, Landon Kraemer, Jeremy Lyle: Multi-Agent Plan Recognition: Formalization and Algorithms. AAAI 2010: 1059-1064
  - [c22] Bikramjit Banerjee, Landon Kraemer: Evaluation and Comparison of Multi-agent Based Crowd Simulation Systems. AGS 2010: 53-66
  - [c21] Bikramjit Banerjee, Landon Kraemer: Coalition structure generation in multi-agent systems with mixed externalities. AAMAS 2010: 175-182
  - [c20] Bikramjit Banerjee, Landon Kraemer: Validation of agent based crowd egress simulation. AAMAS 2010: 1551-1552
  - [c19] Bikramjit Banerjee, Landon Kraemer: Action discovery for reinforcement learning. AAMAS 2010: 1585-1586
- 2008
  - [c18] Bikramjit Banerjee, Matthew Bennett, Mike Johnson, Adel Ali: Congestion Avoidance in Multi-Agent-based Egress Simulation. IC-AI 2008: 151-157
  - [c17] Bikramjit Banerjee, Ahmed Abukmail, Landon Kraemer: Advancing the Layered Approach to Agent-Based Crowd Simulation. PADS 2008: 185-192
- 2007
  - [c16] Bikramjit Banerjee, Peter Stone: General Game Learning Using Knowledge Transfer. IJCAI 2007: 672-677
- 2006
  - [c15] Bikramjit Banerjee, Jing Peng: RVσ(t): a unifying approach to performance and convergence in online multiagent learning. AAMAS 2006: 798-800
- 2005
  - [c14] Bikramjit Banerjee, Jing Peng: Efficient No-Regret Multiagent Learning. AAAI 2005: 41-46
  - [c13] Bikramjit Banerjee, Jing Peng: Efficient learning of multi-step best response. AAMAS 2005: 60-66
  - [c12] Bikramjit Banerjee: On the performance of on-line concurrent reinforcement learners. AAMAS 2005: 1371
  - [c11] Bikramjit Banerjee, Jing Peng: Unifying Convergence and No-Regret in Multiagent Learning. LAMAS 2005: 100-114
- 2004
  - [c10] Bikramjit Banerjee, Jing Peng: Performance Bounded Reinforcement Learning in Strategic Interactions. AAAI 2004: 2-7
  - [c9] Bikramjit Banerjee, Jing Peng: The Role of Reactivity in Multiagent Learning. AAMAS 2004: 538-545
- 2003
  - [c8] Bikramjit Banerjee, Jing Peng: Adaptive policy gradient in multiagent learning. AAMAS 2003: 686-692
- 2002
  - [c7] Bikramjit Banerjee, Jing Peng: Convergent Gradient Ascent in General-Sum Games. ECML 2002: 1-9
  - [c6] Jing Peng, Bikramjit Banerjee, Douglas R. Heisterkamp: Kernel Index for Relevance Feedback Retrieval. FSKD 2002: 187-191
- 2001
  - [c5] Bikramjit Banerjee, Sandip Sen, Jing Peng: Fast Concurrent Reinforcement Learners. IJCAI 2001: 825-832
  - [c4] Doug Warner, J. Neal Richter, Stephen D. Durbin, Bikramjit Banerjee: Mining user session data to facilitate user interaction with a customer service knowledge base in RightNow Web. KDD 2001: 467-472
- 2000
  - [c3] Rajatish Mukherjee, Bikramjit Banerjee, Sandip Sen: Learning Mutual Trust. Trust in Cyber-societies 2000: 145-158
  - [c2] Bikramjit Banerjee, Sandip Sen: Selecting partners. Agents 2000: 261-262
  - [c1] Bikramjit Banerjee, Sandip Debnath, Sandip Sen: Combining Multiple Perspectives. ICML 2000: 33-40
Informal and Other Publications
- 2024
  - [i6] Khadichabonu Valieva, Bikramjit Banerjee: Quasimetric Value Functions with Dense Rewards. CoRR abs/2409.08724 (2024)
- 2023
  - [i5] Keyang He, Prashant Doshi, Bikramjit Banerjee: Latent Interactive A2C for Improved RL in Open Many-Agent Systems. CoRR abs/2305.05159 (2023)
- 2021
  - [i4] Keyang He, Prashant Doshi, Bikramjit Banerjee: Many Agent Reinforcement Learning Under Partial Observability. CoRR abs/2106.09825 (2021)
- 2020
  - [i3] Saurabh Arora, Bikramjit Banerjee, Prashant Doshi: Maximum Entropy Multi-Task Inverse RL. CoRR abs/2004.12873 (2020)
  - [i2] Keyang He, Bikramjit Banerjee, Prashant Doshi: Cooperative-Competitive Reinforcement Learning with History-Dependent Rewards. CoRR abs/2010.08030 (2020)
- 2018
  - [i1] Saurabh Arora, Prashant Doshi, Bikramjit Banerjee: A Framework and Method for Online Inverse Reinforcement Learning. CoRR abs/1805.07871 (2018)