
Commit 088b7af

Add support for parsing f-string as per PEP 701 (#7041)
This PR adds support for PEP 701 in the parser, using the new tokens emitted by the lexer to construct the f-string node. Without an official grammar, f-strings used to be parsed manually; now that we have the specification, it is encoded in the LALRPOP grammar to parse f-strings.

The string-parsing module includes the logic for parsing string literals and joining implicit string concatenations. Since f-strings no longer need to be parsed manually, a lot of the code involved has been removed.

Earlier, there were 2 entry points to this module:

* `parse_string`: Used to parse a single string literal
* `parse_strings`: Used to parse strings which were implicitly concatenated

Now, there are 3 entry points:

* `parse_string_literal`: Renamed from `parse_string`
* `parse_fstring_middle`: Used to parse a `FStringMiddle` token, which is basically a string literal without the quotes
* `concatenate_strings`: Renamed from `parse_strings`, but it now takes the parsed nodes instead, so we just need to concatenate them into a single node

> A short primer on the `FStringMiddle` token: it covers the portions of text inside an f-string that are not part of an expression and are not an opening or closing brace. For example, in `f"foo {bar:.3f{x}} bar"`, the `foo `, `.3f`, and ` bar` parts are `FStringMiddle` token content.

Discussion in the official implementation: python/cpython#102855 (comment)

The AST changes when unicode strings (prefixed with `u`) and f-strings are used in an implicitly concatenated string value. For example:

```python
u"foo" f"{bar}" "baz" " some"
```

Pre Python 3.12, the `kind` field would be assigned only if the prefix was on the first string. So, taking the above example, both `"foo"` and `"baz some"` (implicit concatenation) would be given the `u` kind:

<details><summary>Pre 3.12 AST:</summary>
<p>

```python
Constant(value='foo', kind='u'),
FormattedValue(
    value=Name(id='bar', ctx=Load()),
    conversion=-1),
Constant(value='baz some', kind='u')
```

</p>
</details>

But, post Python 3.12, only the string with the `u` prefix is assigned the kind:

<details><summary>3.12 AST:</summary>
<p>

```python
Constant(value='foo', kind='u'),
FormattedValue(
    value=Name(id='bar', ctx=Load()),
    conversion=-1),
Constant(value='baz some')
```

</p>
</details>

Here are some more iterations around the change:

1. `"foo" f"{bar}" u"baz" "no"`

<details><summary>Pre 3.12</summary>
<p>

```python
Constant(value='foo'),
FormattedValue(
    value=Name(id='bar', ctx=Load()),
    conversion=-1),
Constant(value='bazno')
```

</p>
</details>

<details><summary>3.12</summary>
<p>

```python
Constant(value='foo'),
FormattedValue(
    value=Name(id='bar', ctx=Load()),
    conversion=-1),
Constant(value='bazno', kind='u')
```

</p>
</details>

2. `"foo" f"{bar}" "baz" u"no"`

<details><summary>Pre 3.12</summary>
<p>

```python
Constant(value='foo'),
FormattedValue(
    value=Name(id='bar', ctx=Load()),
    conversion=-1),
Constant(value='bazno')
```

</p>
</details>

<details><summary>3.12</summary>
<p>

```python
Constant(value='foo'),
FormattedValue(
    value=Name(id='bar', ctx=Load()),
    conversion=-1),
Constant(value='bazno')
```

</p>
</details>

3. `u"foo" f"bar {baz} realy" u"bar" "no"`

<details><summary>Pre 3.12</summary>
<p>

```python
Constant(value='foobar ', kind='u'),
FormattedValue(
    value=Name(id='baz', ctx=Load()),
    conversion=-1),
Constant(value=' realybarno', kind='u')
```

</p>
</details>

<details><summary>3.12</summary>
<p>

```python
Constant(value='foobar ', kind='u'),
FormattedValue(
    value=Name(id='baz', ctx=Load()),
    conversion=-1),
Constant(value=' realybarno')
```

</p>
</details>

With the hand-written parser, we were able to provide better error messages for cases such as the following; those messages are now gone, and LALRPOP raises a generic "unexpected token" error instead:

* A closing delimiter was not opened properly
* An opening delimiter was not closed properly
* Empty expression not allowed

The "Too many nested expressions in an f-string" error was removed; we can instead create a lint rule for that. And "The f-string expression cannot include the given character" was removed because f-strings now support those characters, which are mainly the same quotes as the outer ones, escape sequences, comments, etc.

Test plan:

1. Refactor existing test cases to use `parse_suite` instead of `parse_fstrings` (which doesn't exist anymore)
2. Additional test cases are added as required

Updated the snapshots. The change from `parse_fstrings` to `parse_suite` means that the snapshots now contain the module node instead of just a list of f-string parts. I've manually verified that the parts are still the same, along with the node ranges. #7263 (comment)

fixes: #7043
fixes: #6835
1 parent c7e83fd commit 088b7af

31 files changed (+24099 -16245 lines)

crates/ruff_benchmark/benches/formatter.rs (+1 -1)

```diff
@@ -65,7 +65,7 @@ fn benchmark_formatter(criterion: &mut Criterion) {
     let comment_ranges = comment_ranges.finish();
 
     // Parse the AST.
-    let python_ast = parse_tokens(tokens, Mode::Module, "<filename>")
+    let python_ast = parse_tokens(tokens, case.code(), Mode::Module, "<filename>")
         .expect("Input to be a valid python program");
 
     b.iter(|| {
```

crates/ruff_linter/src/linter.rs (+1)

```diff
@@ -143,6 +143,7 @@ pub fn check_path(
     if use_ast || use_imports || use_doc_lines {
         match ruff_python_parser::parse_program_tokens(
             tokens,
+            source_kind.source_code(),
             &path.to_string_lossy(),
             source_type.is_ipynb(),
         ) {
```

crates/ruff_python_ast/src/nodes.rs (+8)

```diff
@@ -2600,6 +2600,14 @@ impl Constant {
             _ => false,
         }
     }
+
+    /// Returns `true` if the constant is a string constant that is a unicode string (i.e., `u"..."`).
+    pub fn is_unicode_string(&self) -> bool {
+        match self {
+            Constant::Str(value) => value.unicode,
+            _ => false,
+        }
+    }
 }
 
 #[derive(Clone, Debug, PartialEq, Eq)]
```
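A small usage sketch for the new predicate (the `kind_for` helper is hypothetical, not part of this commit; the `Option<String>` shape mirrors how CPython's AST exposes the `kind` field):

```rust
use ruff_python_ast::Constant;

/// Hypothetical helper: derive an AST `kind` value for a constant.
/// Only constants parsed from u"..." literals yield Some("u"), matching
/// the post-3.12 concatenation behavior described in the commit message.
fn kind_for(constant: &Constant) -> Option<String> {
    constant.is_unicode_string().then(|| "u".to_string())
}
```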

crates/ruff_python_ast/tests/preorder.rs (+1 -1)

```diff
@@ -130,7 +130,7 @@ fn function_type_parameters() {
 
 fn trace_preorder_visitation(source: &str) -> String {
     let tokens = lex(source, Mode::Module);
-    let parsed = parse_tokens(tokens, Mode::Module, "test.py").unwrap();
+    let parsed = parse_tokens(tokens, source, Mode::Module, "test.py").unwrap();
 
     let mut visitor = RecordVisitor::default();
     visitor.visit_mod(&parsed);
```

crates/ruff_python_ast/tests/visitor.rs (+1 -1)

```diff
@@ -131,7 +131,7 @@ fn function_type_parameters() {
 
 fn trace_visitation(source: &str) -> String {
     let tokens = lex(source, Mode::Module);
-    let parsed = parse_tokens(tokens, Mode::Module, "test.py").unwrap();
+    let parsed = parse_tokens(tokens, source, Mode::Module, "test.py").unwrap();
 
     let mut visitor = RecordVisitor::default();
     walk_module(&mut visitor, &parsed);
```

crates/ruff_python_formatter/src/cli.rs (+1 -1)

```diff
@@ -57,7 +57,7 @@ pub fn format_and_debug_print(input: &str, cli: &Cli, source_type: &Path) -> Res
 
     // Parse the AST.
     let python_ast =
-        parse_tokens(tokens, Mode::Module, "<filename>").context("Syntax error in input")?;
+        parse_tokens(tokens, input, Mode::Module, "<filename>").context("Syntax error in input")?;
 
     let options = PyFormatOptions::from_extension(source_type);
     let formatted = format_node(&python_ast, &comment_ranges, input, options)
```

crates/ruff_python_formatter/src/comments/mod.rs (+1 -1)

```diff
@@ -575,7 +575,7 @@ mod tests {
 
         let comment_ranges = comment_ranges.finish();
 
-        let parsed = parse_tokens(tokens, Mode::Module, "test.py")
+        let parsed = parse_tokens(tokens, code, Mode::Module, "test.py")
             .expect("Expect source to be valid Python");
 
         CommentsTestCase {
```

crates/ruff_python_formatter/src/lib.rs (+2 -2)

```diff
@@ -142,7 +142,7 @@ pub fn format_module(
     let comment_ranges = comment_ranges.finish();
 
     // Parse the AST.
-    let python_ast = parse_tokens(tokens, Mode::Module, "<filename>")?;
+    let python_ast = parse_tokens(tokens, contents, Mode::Module, "<filename>")?;
 
     let formatted = format_node(&python_ast, &comment_ranges, contents, options)?;
 
@@ -242,7 +242,7 @@ def main() -> None:
 
     // Parse the AST.
     let source_path = "code_inline.py";
-    let python_ast = parse_tokens(tokens, Mode::Module, source_path).unwrap();
+    let python_ast = parse_tokens(tokens, src, Mode::Module, source_path).unwrap();
     let options = PyFormatOptions::from_extension(Path::new(source_path));
     let formatted = format_node(&python_ast, &comment_ranges, src, options).unwrap();
 
```

crates/ruff_python_parser/src/lib.rs (+2 -1)

```diff
@@ -146,6 +146,7 @@ pub fn tokenize(contents: &str, mode: Mode) -> Vec<LexResult> {
 /// Parse a full Python program from its tokens.
 pub fn parse_program_tokens(
     lxr: Vec<LexResult>,
+    source: &str,
     source_path: &str,
     is_jupyter_notebook: bool,
 ) -> anyhow::Result<Suite, ParseError> {
@@ -154,7 +155,7 @@ pub fn parse_program_tokens(
     } else {
         Mode::Module
     };
-    match parse_tokens(lxr, mode, source_path)? {
+    match parse_tokens(lxr, source, mode, source_path)? {
         Mod::Module(m) => Ok(m.body),
         Mod::Expression(_) => unreachable!("Mode::Module doesn't return other variant"),
     }
```

crates/ruff_python_parser/src/parser.rs (+55 -6)

```diff
@@ -50,7 +50,7 @@ use ruff_python_ast::{Mod, ModModule, Suite};
 /// ```
 pub fn parse_program(source: &str, source_path: &str) -> Result<ModModule, ParseError> {
     let lexer = lex(source, Mode::Module);
-    match parse_tokens(lexer, Mode::Module, source_path)? {
+    match parse_tokens(lexer, source, Mode::Module, source_path)? {
         Mod::Module(m) => Ok(m),
         Mod::Expression(_) => unreachable!("Mode::Module doesn't return other variant"),
     }
@@ -78,7 +78,7 @@ pub fn parse_suite(source: &str, source_path: &str) -> Result<Suite, ParseError>
 /// ```
 pub fn parse_expression(source: &str, source_path: &str) -> Result<ast::Expr, ParseError> {
     let lexer = lex(source, Mode::Expression);
-    match parse_tokens(lexer, Mode::Expression, source_path)? {
+    match parse_tokens(lexer, source, Mode::Expression, source_path)? {
         Mod::Expression(expression) => Ok(*expression.body),
         Mod::Module(_m) => unreachable!("Mode::Expression doesn't return other variant"),
     }
@@ -107,7 +107,7 @@ pub fn parse_expression_starts_at(
     offset: TextSize,
 ) -> Result<ast::Expr, ParseError> {
     let lexer = lex_starts_at(source, Mode::Module, offset);
-    match parse_tokens(lexer, Mode::Expression, source_path)? {
+    match parse_tokens(lexer, source, Mode::Expression, source_path)? {
         Mod::Expression(expression) => Ok(*expression.body),
         Mod::Module(_m) => unreachable!("Mode::Expression doesn't return other variant"),
     }
@@ -193,7 +193,7 @@ pub fn parse_starts_at(
     offset: TextSize,
 ) -> Result<ast::Mod, ParseError> {
     let lxr = lexer::lex_starts_at(source, mode, offset);
-    parse_tokens(lxr, mode, source_path)
+    parse_tokens(lxr, source, mode, source_path)
 }
 
 /// Parse an iterator of [`LexResult`]s using the specified [`Mode`].
@@ -208,32 +208,37 @@ pub fn parse_starts_at(
 /// ```
 /// use ruff_python_parser::{lexer::lex, Mode, parse_tokens};
 ///
-/// let expr = parse_tokens(lex("1 + 2", Mode::Expression), Mode::Expression, "<embedded>");
+/// let source = "1 + 2";
+/// let expr = parse_tokens(lex(source, Mode::Expression), source, Mode::Expression, "<embedded>");
 /// assert!(expr.is_ok());
 /// ```
 pub fn parse_tokens(
     lxr: impl IntoIterator<Item = LexResult>,
+    source: &str,
     mode: Mode,
     source_path: &str,
 ) -> Result<ast::Mod, ParseError> {
     let lxr = lxr.into_iter();
 
     parse_filtered_tokens(
         lxr.filter_ok(|(tok, _)| !matches!(tok, Tok::Comment { .. } | Tok::NonLogicalNewline)),
+        source,
         mode,
         source_path,
     )
 }
 
 fn parse_filtered_tokens(
     lxr: impl IntoIterator<Item = LexResult>,
+    source: &str,
     mode: Mode,
     source_path: &str,
 ) -> Result<ast::Mod, ParseError> {
     let marker_token = (Tok::start_marker(mode), TextRange::default());
     let lexer = iter::once(Ok(marker_token)).chain(lxr);
     python::TopParser::new()
         .parse(
+            source,
             mode,
             lexer
                 .into_iter()
@@ -1237,11 +1242,55 @@ a = 1
 "#
         .trim();
         let lxr = lexer::lex_starts_at(source, Mode::Ipython, TextSize::default());
-        let parse_err = parse_tokens(lxr, Mode::Module, "<test>").unwrap_err();
+        let parse_err = parse_tokens(lxr, source, Mode::Module, "<test>").unwrap_err();
         assert_eq!(
             parse_err.to_string(),
             "IPython escape commands are only allowed in `Mode::Ipython` at byte offset 6"
                 .to_string()
         );
     }
+
+    #[test]
+    fn test_fstrings() {
+        let parse_ast = parse_suite(
+            r#"
+f"{" f"}"
+f"{foo!s}"
+f"{3,}"
+f"{3!=4:}"
+f'{3:{"}"}>10}'
+f'{3:{"{"}>10}'
+f"{ foo = }"
+f"{ foo = :.3f }"
+f"{ foo = !s }"
+f"{ 1, 2 = }"
+f'{f"{3.1415=:.1f}":*^20}'
+
+{"foo " f"bar {x + y} " "baz": 10}
+match foo:
+    case "foo " f"bar {x + y} " "baz":
+        pass
+"#
+            .trim(),
+            "<test>",
+        )
+        .unwrap();
+        insta::assert_debug_snapshot!(parse_ast);
+    }
+
+    #[test]
+    fn test_fstrings_with_unicode() {
+        let parse_ast = parse_suite(
+            r#"
+u"foo" f"{bar}" "baz" " some"
+"foo" f"{bar}" u"baz" " some"
+"foo" f"{bar}" "baz" u" some"
+u"foo" f"bar {baz} really" u"bar" "no"
+"#
+            .trim(),
+            "<test>",
+        )
+        .unwrap();
+        insta::assert_debug_snapshot!(parse_ast);
+    }
 }
```
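Since `parse_tokens` now takes the source text alongside the token stream, callers must thread the original input through. A minimal sketch mirroring the updated doctest above (the f-string source line is an arbitrary example):

```rust
use ruff_python_parser::{lexer::lex, parse_tokens, Mode};

fn main() {
    let source = r#"x = f"hello {name}""#;
    // The original source now travels with the tokens, presumably so the
    // grammar can slice FStringMiddle text straight out of the input.
    let module = parse_tokens(lex(source, Mode::Module), source, Mode::Module, "<embedded>")
        .expect("expected valid Python source");
    println!("{module:#?}");
}
```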
