Package | Description |
---|---|
org.antlr.analysis | |
org.antlr.codegen | |
org.antlr.grammar.v3 | |
org.antlr.gunit | |
org.antlr.runtime | |
org.antlr.runtime.debug | |
org.antlr.runtime.tree | |
org.antlr.tool | |
Modifier and Type | Field | Description |
---|---|---|
protected Map<DFAState,Map<Integer,Set<Token>>> |
DecisionProbe.stateToIncompletelyCoveredAltsMap |
Tracks, per DFA state, the alts that are insufficiently covered by predicates.
|
Modifier and Type | Method | Description |
---|---|---|
List<Token> |
MachineProbe.getGrammarLocationsForInputSequence(List<Set<NFAState>> nfaStates,
List<IntSet> labels) |
Given an alternative associated with a DFA state, return the list of
tokens (from the grammar) associated with the path through the NFA
following the labels sequence.
|
Map<Integer,Set<Token>> |
DecisionProbe.getIncompletelyCoveredAlts(DFAState d) |
Return the alts whose predicate context was insufficient to
resolve a nondeterminism for state d.
|
Modifier and Type | Method | Description |
---|---|---|
void |
DecisionProbe.reportIncompletelyCoveredAlts(DFAState d,
Map<Integer,Set<Token>> altToLocationsReachableWithoutPredicate) |
Modifier and Type | Method | Description |
---|---|---|
void |
CodeGenerator.issueInvalidAttributeError(String x,
String y,
Rule enclosingRule,
Token actionToken,
int outerAltNum) |
|
void |
CodeGenerator.issueInvalidAttributeError(String x,
Rule enclosingRule,
Token actionToken,
int outerAltNum) |
|
void |
CodeGenerator.issueInvalidScopeError(String x,
String y,
Rule enclosingRule,
Token actionToken,
int outerAltNum) |
|
List<Object> |
Python3Target.postProcessAction(List<Object> chunks,
Token actionToken) |
|
List<Object> |
PythonTarget.postProcessAction(List<Object> chunks,
Token actionToken) |
|
List<Object> |
Target.postProcessAction(List<Object> chunks,
Token actionToken) |
Give target a chance to do some postprocessing on actions.
|
org.stringtemplate.v4.ST |
CodeGenerator.translateTemplateConstructor(String ruleName,
int outerAltNum,
Token actionToken,
String templateActionText) |
Given a template constructor action like %foo(a={...}) in
an action, translate it to the appropriate template constructor
from the templateLib.
|
Modifier and Type | Method | Description |
---|---|---|
Token |
ActionAnalysis.nextToken() |
|
Token |
ActionTranslator.nextToken() |
|
Token |
ANTLRLexer.nextToken() |
Constructor | Description |
---|---|
ActionTranslator(CodeGenerator generator,
String ruleName,
Token actionToken,
int outerAltNum) |
Modifier and Type | Method | Description |
---|---|---|
Token |
gUnitParser.output() |
Constructor | Description |
---|---|
OutputTest(Token token) |
|
ReturnTest(Token retval) |
Modifier and Type | Class | Description |
---|---|---|
class |
ClassicToken |
A Token object like we'd use in ANTLR 2.x; it has an actual string created
and associated with this object.
|
class |
CommonToken |
Modifier and Type | Field | Description |
---|---|---|
static Token |
Token.EOF_TOKEN |
|
static Token |
Token.INVALID_TOKEN |
|
static Token |
Token.SKIP_TOKEN |
In an action, a lexer rule can set token to this SKIP_TOKEN and ANTLR
will avoid creating a token for this symbol and try to fetch another.
|
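The SKIP_TOKEN mechanism can be sketched with a tiny self-contained analogue: `nextToken()` compares the rule's result against a skip sentinel by identity and, on a match, emits nothing and fetches the next candidate. All names below are hypothetical stand-ins, not the ANTLR 3 runtime API.

```java
import java.util.Arrays;
import java.util.Iterator;

// Minimal analogue of the SKIP_TOKEN loop: a "rule" that returns the skip
// sentinel causes nextToken() to discard the symbol and try again.
public class SkipTokenDemo {
    static final String SKIP_TOKEN = "<<skip>>"; // sentinel, compared by identity

    private final Iterator<String> source;

    SkipTokenDemo(String... raw) { this.source = Arrays.asList(raw).iterator(); }

    // Stand-in for a lexer rule action: whitespace "tokens" are skipped.
    private String match(String text) { return text.isBlank() ? SKIP_TOKEN : text; }

    String nextToken() {
        while (source.hasNext()) {
            String t = match(source.next());
            if (t != SKIP_TOKEN) return t; // sentinel means: no token created, fetch another
        }
        return "<EOF>";
    }
}
```

The identity comparison (`!=` rather than `equals`) mirrors how a shared sentinel object is typically checked: the rule either returns the one sentinel instance or a real token.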
Token |
ParserRuleReturnScope.start |
|
Token |
ParserRuleReturnScope.stop |
|
Token |
RecognitionException.token |
The current Token when an error occurred.
|
Token |
RecognizerSharedState.token |
The goal of all lexer rules/methods is to create a token object.
|
Modifier and Type | Field | Description |
---|---|---|
protected List<Token> |
BufferedTokenStream.tokens |
Record every single token pulled from the source so we can reproduce
chunks of it later.
|
protected List<Token> |
LegacyCommonTokenStream.tokens |
Record every single token pulled from the source so we can reproduce
chunks of it later.
|
Modifier and Type | Method | Description |
---|---|---|
Token |
Lexer.emit() |
The standard method called to automatically emit a token at the
outermost lexical rule.
|
Token |
BufferedTokenStream.get(int i) |
|
Token |
LegacyCommonTokenStream.get(int i) |
Return absolute token i; ignore which channel the tokens are on;
that is, count all tokens not just on-channel tokens.
|
Token |
TokenStream.get(int i) |
Get a token at an absolute index i; 0..n-1.
|
Token |
UnbufferedTokenStream.get(int i) |
|
Token |
Lexer.getEOFToken() |
Returns the EOF token by default; override this method if you need
to return a custom token instead.
|
Token |
UnwantedTokenException.getUnexpectedToken() |
|
protected Token |
BufferedTokenStream.LB(int k) |
|
protected Token |
CommonTokenStream.LB(int k) |
|
protected Token |
LegacyCommonTokenStream.LB(int k) |
Look backwards k on-channel tokens.
|
Token |
BufferedTokenStream.LT(int k) |
|
Token |
CommonTokenStream.LT(int k) |
|
Token |
LegacyCommonTokenStream.LT(int k) |
Get the kth token from the current position, 1..n, where k=1 is the
first symbol of lookahead.
|
Token |
TokenStream.LT(int k) |
Get the Token at the current input pointer + k ahead, where k=1 is the next Token.
|
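The split between absolute indexing and on-channel lookahead can be sketched with a miniature buffered stream: `get(i)` indexes every buffered token, while `LT(k)` counts only default-channel tokens from the current position, the way CommonTokenStream does. The mini-types below are hypothetical, not the ANTLR 3 runtime classes.

```java
import java.util.List;

// Sketch: get(i) is absolute over all channels; LT(k) skips off-channel
// tokens so k=1 is the next *visible* token.
public class LookaheadDemo {
    record Tok(String text, int channel) {}
    static final int DEFAULT_CHANNEL = 0;

    private final List<Tok> tokens; // every token pulled from the source
    private int p = 0;              // current absolute position

    LookaheadDemo(List<Tok> tokens) { this.tokens = tokens; }

    Tok get(int i) { return tokens.get(i); } // counts all tokens, any channel

    Tok LT(int k) {
        int seen = 0;
        for (int i = p; i < tokens.size(); i++) {
            if (tokens.get(i).channel() == DEFAULT_CHANNEL && ++seen == k) {
                return tokens.get(i);
            }
        }
        return new Tok("<EOF>", DEFAULT_CHANNEL);
    }
}
```

With tokens `a`, hidden whitespace, `b`: `get(1)` returns the whitespace, but `LT(2)` returns `b` because lookahead never sees off-channel tokens.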
Token |
UnbufferedTokenStream.nextElement() |
|
Token |
Lexer.nextToken() |
Return a token from this source; i.e., match a token on the char
stream.
|
Token |
TokenSource.nextToken() |
Return a Token object from your input stream (usually a CharStream).
|
Modifier and Type | Method | Description |
---|---|---|
List<? extends Token> |
BufferedTokenStream.get(int start,
int stop) |
Get all tokens from start..stop, inclusive.
|
List<? extends Token> |
LegacyCommonTokenStream.get(int start,
int stop) |
Get all tokens from start..stop, inclusive.
|
List<? extends Token> |
BufferedTokenStream.getTokens() |
|
List<? extends Token> |
BufferedTokenStream.getTokens(int start,
int stop) |
|
List<? extends Token> |
BufferedTokenStream.getTokens(int start,
int stop,
int ttype) |
|
List<? extends Token> |
BufferedTokenStream.getTokens(int start,
int stop,
List<Integer> types) |
|
List<? extends Token> |
BufferedTokenStream.getTokens(int start,
int stop,
BitSet types) |
Given a start and stop index, return a List of all tokens in
the token type BitSet.
|
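The BitSet-filtered variant amounts to a range walk with a set-membership test: visit the inclusive index range and keep tokens whose type bit is set. A minimal sketch with hypothetical mini-types (not the ANTLR 3 runtime classes):

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

// Sketch of getTokens(start, stop, BitSet types): inclusive range scan,
// keeping only tokens whose type is in the set.
public class GetTokensDemo {
    record Tok(int type, String text) {}

    static List<Tok> getTokens(List<Tok> tokens, int start, int stop, BitSet types) {
        List<Tok> out = new ArrayList<>();
        for (int i = start; i <= stop && i < tokens.size(); i++) { // stop is inclusive
            if (types.get(tokens.get(i).type())) out.add(tokens.get(i));
        }
        return out;
    }
}
```

For example, filtering tokens `a`(type 1), `+`(type 2), `b`(type 1) over indexes 0..2 with only bit 1 set keeps `a` and `b`.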
List<? extends Token> |
LegacyCommonTokenStream.getTokens() |
|
List<? extends Token> |
LegacyCommonTokenStream.getTokens(int start,
int stop) |
|
List<? extends Token> |
LegacyCommonTokenStream.getTokens(int start,
int stop,
int ttype) |
|
List<? extends Token> |
LegacyCommonTokenStream.getTokens(int start,
int stop,
List<Integer> types) |
|
List<? extends Token> |
LegacyCommonTokenStream.getTokens(int start,
int stop,
BitSet types) |
Given a start and stop index, return a List of all tokens in
the token type BitSet.
|
Modifier and Type | Method | Description |
---|---|---|
void |
TokenRewriteStream.delete(String programName,
Token from,
Token to) |
|
void |
TokenRewriteStream.delete(Token indexT) |
|
void |
TokenRewriteStream.delete(Token from,
Token to) |
|
void |
Lexer.emit(Token token) |
Currently does not support multiple emits per nextToken invocation
for efficiency reasons.
|
String |
BaseRecognizer.getTokenErrorDisplay(Token t) |
How should a token be displayed in an error message? The default
is to display just the text, but during development you might
want to have a lot of information spit out.
|
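A typical implementation of such a display hook escapes whitespace so the token fits on one line of an error message, and falls back to the token type when there is no text. The sketch below illustrates that idea; treat the exact escaping and fallback format as assumptions, not the ANTLR 3 output.

```java
// Sketch of a getTokenErrorDisplay-style hook: render token text for an
// error message, keeping it on a single line.
public class TokenDisplayDemo {
    static String getTokenErrorDisplay(String text, int type) {
        if (text == null) text = "<" + type + ">"; // no text: show the type instead
        text = text.replace("\n", "\\n")
                   .replace("\r", "\\r")
                   .replace("\t", "\\t");          // escape line breaks and tabs
        return "'" + text + "'";
    }
}
```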
void |
TokenRewriteStream.insertAfter(String programName,
Token t,
Object text) |
|
void |
TokenRewriteStream.insertAfter(Token t,
Object text) |
|
void |
TokenRewriteStream.insertBefore(String programName,
Token t,
Object text) |
|
void |
TokenRewriteStream.insertBefore(Token t,
Object text) |
|
boolean |
UnbufferedTokenStream.isEOF(Token o) |
|
void |
TokenRewriteStream.replace(String programName,
Token from,
Token to,
Object text) |
|
void |
TokenRewriteStream.replace(Token indexT,
Object text) |
|
void |
TokenRewriteStream.replace(Token from,
Token to,
Object text) |
|
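The rewrite methods above share one design idea: `insertBefore`, `replace`, and `delete` never mutate the token buffer; they queue operations that are applied only when the text is rendered, so the original token stream stays intact. A deliberately tiny, hypothetical analogue (indexes stand in for Token arguments, and only one pending operation per position is supported):

```java
import java.util.List;

// Sketch of the lazy-rewrite idea: record edits against token positions,
// then apply them in a single pass when rendering.
public class RewriteDemo {
    private final List<String> tokens;
    private final String[] replacements; // null = keep the original token text
    private final String[] inserts;      // text inserted before token i

    RewriteDemo(List<String> tokens) {
        this.tokens = tokens;
        this.replacements = new String[tokens.size()];
        this.inserts = new String[tokens.size()];
    }

    void insertBefore(int i, String text) { inserts[i] = text; }

    void replace(int from, int to, String text) {
        replacements[from] = text;                                // emit text at 'from'...
        for (int i = from + 1; i <= to; i++) replacements[i] = ""; // ...suppress the rest
    }

    void delete(int from, int to) { replace(from, to, ""); }

    String render() { // apply queued operations against the untouched buffer
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < tokens.size(); i++) {
            if (inserts[i] != null) sb.append(inserts[i]);
            sb.append(replacements[i] != null ? replacements[i] : tokens.get(i));
        }
        return sb.toString();
    }
}
```

For instance, on the tokens `int`, ` `, `x`, `;`, queuing `insertBefore(0, "const ")` and `replace(2, 2, "y")` renders `const int y;` while the token list itself is unchanged.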
String |
BufferedTokenStream.toString(Token start,
Token stop) |
|
String |
LegacyCommonTokenStream.toString(Token start,
Token stop) |
|
String |
TokenStream.toString(Token start,
Token stop) |
Because the user is not required to use a token with an index stored
in it, we must provide a means for two token objects themselves to
indicate the start/end location.
|
String |
UnbufferedTokenStream.toString(Token start,
Token stop) |
Modifier and Type | Method | Description |
---|---|---|
List<String> |
BaseRecognizer.toStrings(List<? extends Token> tokens) |
A convenience method for use most often with template rewrites.
|
Constructor | Description |
---|---|
ClassicToken(Token oldToken) |
|
CommonToken(Token oldToken) |
Modifier and Type | Class | Description |
---|---|---|
static class |
RemoteDebugEventSocketListener.ProxyToken |
Modifier and Type | Field | Description |
---|---|---|
protected Token |
Profiler.lastRealTokenTouchedInDecision |
Modifier and Type | Method | Description |
---|---|---|
Token |
DebugTokenStream.get(int i) |
|
Token |
DebugTreeAdaptor.getToken(Object t) |
|
Token |
DebugTokenStream.LT(int i) |
Modifier and Type | Method | Description |
---|---|---|
void |
DebugTreeAdaptor.addChild(Object t,
Token child) |
|
Object |
DebugTreeAdaptor.becomeRoot(Token newRoot,
Object oldRoot) |
|
void |
BlankDebugEventListener.consumeHiddenToken(Token token) |
|
void |
DebugEventHub.consumeHiddenToken(Token token) |
|
void |
DebugEventListener.consumeHiddenToken(Token t) |
An off-channel input token was consumed.
|
void |
DebugEventRepeater.consumeHiddenToken(Token token) |
|
void |
DebugEventSocketProxy.consumeHiddenToken(Token t) |
|
void |
ParseTreeBuilder.consumeHiddenToken(Token token) |
|
void |
Profiler.consumeHiddenToken(Token token) |
|
void |
BlankDebugEventListener.consumeToken(Token token) |
|
void |
DebugEventHub.consumeToken(Token token) |
|
void |
DebugEventListener.consumeToken(Token t) |
An input token was consumed; matched by any kind of element.
|
void |
DebugEventRepeater.consumeToken(Token token) |
|
void |
DebugEventSocketProxy.consumeToken(Token t) |
|
void |
ParseTreeBuilder.consumeToken(Token token) |
|
void |
Profiler.consumeToken(Token token) |
|
Object |
DebugTreeAdaptor.create(int tokenType,
Token fromToken) |
|
Object |
DebugTreeAdaptor.create(int tokenType,
Token fromToken,
String text) |
|
Object |
DebugTreeAdaptor.create(Token payload) |
|
void |
BlankDebugEventListener.createNode(Object node,
Token token) |
|
void |
DebugEventHub.createNode(Object node,
Token token) |
|
void |
DebugEventListener.createNode(Object node,
Token token) |
Announce a new node built from an existing token.
|
void |
DebugEventRepeater.createNode(Object node,
Token token) |
|
void |
DebugEventSocketProxy.createNode(Object node,
Token token) |
|
void |
TraceDebugEventListener.createNode(Object node,
Token token) |
|
Object |
DebugTreeAdaptor.errorNode(TokenStream input,
Token start,
Token stop,
RecognitionException e) |
|
void |
BlankDebugEventListener.LT(int i,
Token t) |
|
void |
DebugEventHub.LT(int index,
Token t) |
|
void |
DebugEventListener.LT(int i,
Token t) |
Somebody (anybody) looked ahead.
|
void |
DebugEventRepeater.LT(int i,
Token t) |
|
void |
DebugEventSocketProxy.LT(int i,
Token t) |
|
void |
Profiler.LT(int i,
Token t) |
Track refs to lookahead if in a fixed/nonfixed decision.
|
protected String |
DebugEventSocketProxy.serializeToken(Token t) |
|
void |
DebugTreeAdaptor.setTokenBoundaries(Object t,
Token startToken,
Token stopToken) |
|
String |
DebugTokenStream.toString(Token start,
Token stop) |
Modifier and Type | Field | Description |
---|---|---|
Token |
CommonErrorNode.start |
|
Token |
CommonErrorNode.stop |
|
Token |
CommonTree.token |
A single token is the payload.
|
Modifier and Type | Field | Description |
---|---|---|
List<Token> |
ParseTree.hiddenTokens |
Modifier and Type | Method | Description |
---|---|---|
abstract Token |
BaseTreeAdaptor.createToken(int tokenType,
String text) |
Tell me how to create a token for use with imaginary token nodes.
|
abstract Token |
BaseTreeAdaptor.createToken(Token fromToken) |
Tell me how to create a token for use with imaginary token nodes.
|
Token |
CommonTreeAdaptor.createToken(int tokenType,
String text) |
Tell me how to create a token for use with imaginary token nodes.
|
Token |
CommonTreeAdaptor.createToken(Token fromToken) |
Tell me how to create a token for use with imaginary token nodes.
|
Token |
CommonTree.getToken() |
|
Token |
CommonTreeAdaptor.getToken(Object t) |
What is the Token associated with this node? If
you are not using CommonTree, then you must
override this in your own adaptor.
|
Token |
TreeAdaptor.getToken(Object t) |
Return the token object from which this node was created.
|
Token |
RewriteRuleTokenStream.nextToken() |
Modifier and Type | Method | Description |
---|---|---|
Object |
BaseTreeAdaptor.becomeRoot(Token newRoot,
Object oldRoot) |
|
Object |
TreeAdaptor.becomeRoot(Token newRoot,
Object oldRoot) |
Create a node for newRoot and make it the root of oldRoot.
|
Object |
BaseTreeAdaptor.create(int tokenType,
Token fromToken) |
|
Object |
BaseTreeAdaptor.create(int tokenType,
Token fromToken,
String text) |
|
Object |
CommonTreeAdaptor.create(Token payload) |
|
Object |
TreeAdaptor.create(int tokenType,
Token fromToken) |
Create a new node derived from a token, with a new token type.
|
Object |
TreeAdaptor.create(int tokenType,
Token fromToken,
String text) |
Same as create(tokenType,fromToken) except set the text too.
|
Object |
TreeAdaptor.create(Token payload) |
Create a tree node from a Token object; for CommonTree-type trees,
the token just becomes the payload.
|
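The payload idea can be sketched in a few lines: the AST node owns the Token that produced it, children hang off a plain list, and the adaptor's `create` is just a constructor call. Hypothetical mini-classes, not the ANTLR 3 runtime:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the CommonTree design: the node's only payload is its Token.
public class TreePayloadDemo {
    record Tok(int type, String text) {}

    static class Node {
        final Tok payload;                         // the single token is the payload
        final List<Node> children = new ArrayList<>();
        Node(Tok payload) { this.payload = payload; }
        Node addChild(Node n) { children.add(n); return this; }
    }

    // Analogue of TreeAdaptor.create(Token payload).
    static Node create(Tok payload) { return new Node(payload); }
}
```

Building `^(+ a b)` is then `create(plusTok).addChild(create(aTok)).addChild(create(bTok))`: the `+` token becomes the root payload with two token-payload children.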
Object |
TreeWizard.TreePatternTreeAdaptor.create(Token payload) |
|
abstract Token |
BaseTreeAdaptor.createToken(Token fromToken) |
Tell me how to create a token for use with imaginary token nodes.
|
Token |
CommonTreeAdaptor.createToken(Token fromToken) |
Tell me how to create a token for use with imaginary token nodes.
|
Object |
BaseTreeAdaptor.errorNode(TokenStream input,
Token start,
Token stop,
RecognitionException e) |
Create a tree node that holds the start and stop tokens associated
with an error.
|
Object |
TreeAdaptor.errorNode(TokenStream input,
Token start,
Token stop,
RecognitionException e) |
Return a tree node representing an error.
|
void |
CommonTreeAdaptor.setTokenBoundaries(Object t,
Token startToken,
Token stopToken) |
Track start/stop token for subtree root created for a rule.
|
void |
TreeAdaptor.setTokenBoundaries(Object t,
Token startToken,
Token stopToken) |
Where are the bounds in the input token stream for this node and
all children? Each rule that creates AST nodes will call this
method right before returning.
|
Constructor | Description |
---|---|
CommonErrorNode(TokenStream input,
Token start,
Token stop,
RecognitionException e) |
|
CommonTree(Token t) |
|
TreePattern(Token payload) |
|
WildcardTreePattern(Token payload) |
Modifier and Type | Field | Description |
---|---|---|
Token |
AttributeScope.derivedFromToken |
The input token this scope is associated with (for error handling).
|
Token |
Grammar.LabelElementPair.label |
|
Token |
GrammarSemanticsMessage.offendingToken |
Most of the time, we'll have a token such as an undefined rule ref
and so this will be set.
|
Token |
GrammarSyntaxMessage.offendingToken |
Most of the time, we'll have a token and so this will be set.
|
Modifier and Type | Field | Description |
---|---|---|
Map<Integer,Set<Token>> |
GrammarInsufficientPredicatesMessage.altToLocations |
|
protected Set<Token> |
Grammar.tokenIDRefs |
The unique set of all token ID references in any rule.
|
Modifier and Type | Method | Description |
---|---|---|
Token |
Interpreter.nextToken() |
Modifier and Type | Method | Description |
---|---|---|
protected void |
NameSpaceChecker.checkForLabelConflict(Rule r,
Token label) |
Make sure a label doesn't conflict with another symbol.
|
boolean |
NameSpaceChecker.checkForLabelTypeMismatch(Rule r,
Token label,
int type) |
If the type of the previous label differs from the new label's type, that's an error.
|
AttributeScope |
Grammar.createParameterScope(String ruleName,
Token argAction) |
|
AttributeScope |
Grammar.createReturnScope(String ruleName,
Token retAction) |
|
AttributeScope |
Grammar.createRuleScope(String ruleName,
Token scopeAction) |
|
AttributeScope |
Grammar.defineGlobalScope(String name,
Token scopeAction) |
|
protected void |
Grammar.defineLabel(Rule r,
Token label,
GrammarAST element,
int type) |
Define a label in rule r; check its validity, then ask the
Rule object to actually define it.
|
void |
Rule.defineLabel(Token label,
GrammarAST elementRef,
int type) |
|
void |
Grammar.defineLexerRuleFoundInParser(Token ruleToken,
GrammarAST ruleAST) |
|
void |
Grammar.defineRule(Token ruleToken,
String modifier,
Map<String,Object> options,
GrammarAST tree,
GrammarAST argActionAST,
int numAlts) |
Define a new rule.
|
void |
Grammar.defineRuleListLabel(String ruleName,
Token label,
GrammarAST element) |
|
void |
Grammar.defineRuleRefLabel(String ruleName,
Token label,
GrammarAST ruleRef) |
|
void |
Grammar.defineTokenListLabel(String ruleName,
Token label,
GrammarAST element) |
|
void |
Grammar.defineTokenRefLabel(String ruleName,
Token label,
GrammarAST tokenRef) |
|
void |
Grammar.defineWildcardTreeLabel(String ruleName,
Token label,
GrammarAST tokenRef) |
|
void |
Grammar.defineWildcardTreeListLabel(String ruleName,
Token label,
GrammarAST tokenRef) |
|
Collection<String> |
LeftRecursiveRuleAnalyzer.getNamesFromArgAction(Token t) |
|
static void |
ErrorManager.grammarError(int msgID,
Grammar g,
Token token) |
|
static void |
ErrorManager.grammarError(int msgID,
Grammar g,
Token token,
Object arg) |
|
static void |
ErrorManager.grammarError(int msgID,
Grammar g,
Token token,
Object arg,
Object arg2) |
|
static void |
ErrorManager.grammarWarning(int msgID,
Grammar g,
Token token) |
|
static void |
ErrorManager.grammarWarning(int msgID,
Grammar g,
Token token,
Object arg) |
|
static void |
ErrorManager.grammarWarning(int msgID,
Grammar g,
Token token,
Object arg,
Object arg2) |
|
void |
GrammarAST.initialize(Token token) |
|
String |
Grammar.setOption(String key,
Object value,
Token optionsStartToken) |
Save the option key/value pair and process it; return the key,
or null if the option is invalid.
|
String |
Rule.setOption(String key,
Object value,
Token optionsStartToken) |
Save the option key/value pair and process it; return the key,
or null if the option is invalid.
|
void |
Grammar.setOptions(Map<String,Object> options,
Token optionsStartToken) |
|
void |
Rule.setOptions(Map<String,Object> options,
Token optionsStartToken) |
|
void |
GrammarAST.setTokenBoundaries(Token startToken,
Token stopToken) |
Track start/stop token for subtree root created for a rule.
|
static void |
ErrorManager.syntaxError(int msgID,
Grammar grammar,
Token token,
Object arg,
RecognitionException re) |
Modifier and Type | Method | Description |
---|---|---|
static void |
ErrorManager.insufficientPredicates(DecisionProbe probe,
DFAState d,
Map<Integer,Set<Token>> altToUncoveredLocations) |
Constructor | Description |
---|---|
AttributeScope(String name,
Token derivedFromToken) |
|
AttributeScope(Grammar grammar,
String name,
Token derivedFromToken) |
|
GrammarAST(Token token) |
|
GrammarSemanticsMessage(int msgID,
Grammar g,
Token offendingToken) |
|
GrammarSemanticsMessage(int msgID,
Grammar g,
Token offendingToken,
Object arg) |
|
GrammarSemanticsMessage(int msgID,
Grammar g,
Token offendingToken,
Object arg,
Object arg2) |
|
GrammarSyntaxMessage(int msgID,
Grammar grammar,
Token offendingToken,
Object arg,
RecognitionException exception) |
|
GrammarSyntaxMessage(int msgID,
Grammar grammar,
Token offendingToken,
RecognitionException exception) |
|
LabelElementPair(Token label,
GrammarAST elementRef) |
|
RuleLabelScope(Rule referencedRule,
Token actionToken) |
Constructor | Description |
---|---|
GrammarInsufficientPredicatesMessage(DecisionProbe probe,
DFAState problemState,
Map<Integer,Set<Token>> altToLocations) |
Copyright © 1992–2018 ANTLR. All rights reserved.