Indirect Speech Definition and Examples


Indirect speech is a report of what someone else said or wrote that does not use that person's exact words (quoting the exact words is called direct speech). It's also called indirect discourse or reported speech.

Direct vs. Indirect Speech

In direct speech, a person's exact words are placed in quotation marks and set off with a comma and a reporting clause or signal phrase, such as "said" or "asked." In fiction writing, using direct speech can display the emotion of an important scene in vivid detail through the words themselves as well as the description of how something was said. In nonfiction writing or journalism, direct speech can emphasize a particular point by using a source's exact words.

Indirect speech is paraphrasing what someone said or wrote. In writing, it functions to move a piece along by boiling down points that an interview source made. Unlike direct speech, indirect speech is not usually placed inside quotation marks. However, both are attributed to the speaker because they come directly from a source.

How to Convert

In the first example below, the verb in the present tense in the line of direct speech (is) may change to the past tense (was) in indirect speech, though it doesn't necessarily have to with a present-tense verb. If it makes sense in context to keep it present tense, that's fine.

  • Direct speech:  "Where is your textbook?" the teacher asked me.
  • Indirect speech:  The teacher asked me where my textbook was.
  • Indirect speech: The teacher asked me where my textbook is.

Keeping the present tense in reported speech can give the impression of immediacy, suggesting that the statement is being reported soon after the direct quote:

  • Direct speech:  Bill said, "I can't come in today, because I'm sick."
  • Indirect speech:  Bill said (that) he can't come in today because he's sick.

Future Tense

An action in the future (present continuous tense or future) doesn't have to change verb tense, either, as these examples demonstrate.

  • Direct speech:  Jerry said, "I'm going to buy a new car."
  • Indirect speech:  Jerry said (that) he's going to buy a new car.
  • Direct speech:  Jerry said, "I will buy a new car."
  • Indirect speech:  Jerry said (that) he will buy a new car.

Indirectly reporting an action in the future can change verb tenses when needed. In this next example, changing "am going" to "was going" implies that she has already left for the mall. However, keeping the tense progressive or continuous implies that the action continues, that she's still at the mall and not back yet.

  • Direct speech:  She said, "I'm going to the mall."
  • Indirect speech:  She said (that) she was going to the mall.
  • Indirect speech: She said (that) she is going to the mall.

Other Changes

With a past-tense verb in the direct quote, the verb changes to past perfect.

  • Direct speech:  She said, "I went to the mall."
  • Indirect speech:  She said (that) she had gone to the mall.

Note the change in first person (I) and second person (your) pronouns and word order in the indirect versions. The person has to change because the one reporting the action is not the one actually doing it. Third person (he or she) in direct speech remains in the third person.
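To make that person shift concrete, here is a toy Python sketch; the dictionary and function name are invented for illustration only. It handles nothing but first-person pronouns for a female speaker, since real conversion also depends on who is reporting and to whom, and the tense change is a separate step.

```python
# Toy illustration of the person shift in indirect speech.
# Assumes the original speaker is female; a real converter would also
# need to know who is reporting the words and to whom.
FIRST_TO_THIRD_FEMALE = {
    "i": "she", "me": "her", "my": "her",
    "mine": "hers", "myself": "herself",
}

def shift_person(word: str) -> str:
    """Return the third-person equivalent of a first-person pronoun."""
    return FIRST_TO_THIRD_FEMALE.get(word.lower(), word)

# "I went to the mall" -> "she went to the mall" (tense handled separately)
print(" ".join(shift_person(w) for w in "I went to the mall".split()))
```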

Free Indirect Speech

In free indirect speech, which is commonly used in fiction, the reporting clause (or signal phrase) is omitted. Using the technique is a way to follow a character's point of view—in third-person limited omniscient—and show her thoughts intermingled with narration.

Typically in fiction italics show a character's exact thoughts, and quote marks show dialogue. Free indirect speech makes do without the italics and simply combines the internal thoughts of the character with the narration of the story. Writers who have used this technique include James Joyce, Jane Austen, Virginia Woolf, Henry James, Zora Neale Hurston, and D.H. Lawrence.  



Direct and Indirect Speech: Useful Rules and Examples

Are you having trouble understanding the difference between direct and indirect speech? Direct speech is when you quote someone’s exact words, while indirect speech is when you report what someone said without using their exact words. This can be a tricky concept to grasp, but with a little practice, you’ll be able to use both forms of speech with ease.

Direct and Indirect Speech

When someone speaks, we can report what they said in two ways: direct speech and indirect speech. Direct speech is when we quote the exact words that were spoken, while indirect speech is when we report what was said without using the speaker’s exact words. Here’s an example:

Direct speech: “I love pizza,” said John. Indirect speech: John said that he loved pizza.

Using direct speech can make your writing more engaging and can help to convey the speaker’s tone and emotion. However, indirect speech can be useful when you want to summarize what someone said or when you don’t have the exact words that were spoken.

To change direct speech to indirect speech, you need to follow some rules. Firstly, you need to change the tense of the verb in the reported speech to match the tense of the reporting verb. Secondly, you need to change the pronouns and adverbs in the reported speech to match the new speaker. Here’s an example:

Direct speech: “I will go to the park,” said Sarah. Indirect speech: Sarah said that she would go to the park.

It’s important to note that when you use indirect speech, you need to use reporting verbs such as “said,” “told,” or “asked” to indicate who is speaking. Here’s an example:

Direct speech: “What time is it?” asked Tom. Indirect speech: Tom asked what time it was.

In summary, understanding direct and indirect speech is crucial for effective communication and writing. Direct speech can be used to convey the speaker’s tone and emotion, while indirect speech can be useful when summarizing what someone said. By following the rules for changing direct speech to indirect speech, you can accurately report what was said while maintaining clarity and readability in your writing.

Differences between Direct and Indirect Speech

When it comes to reporting speech, there are two ways to go about it: direct and indirect speech. Direct speech is when you report someone’s exact words, while indirect speech is when you report what someone said without using their exact words. Here are some of the key differences between direct and indirect speech:

Change of Pronouns

In direct speech, the pronouns used are those of the original speaker. However, in indirect speech, the pronouns have to be changed to reflect the perspective of the reporter. For example:

  • Direct speech: “I am going to the store,” said John.
  • Indirect speech: John said he was going to the store.

In the above example, the pronoun “I” changes to “he” in indirect speech.

Change of Tenses

Another major difference between direct and indirect speech is the change of tenses. In direct speech, the verb tense used is the same as that used by the original speaker. However, in indirect speech, the verb tense may change depending on the context. For example:

  • Direct speech: “I am studying for my exams,” said Sarah.
  • Indirect speech: Sarah said she was studying for her exams.

In the above example, the present continuous tense “am studying” changes to the past continuous tense “was studying” in indirect speech.

Change of Time and Place References

When reporting indirect speech, the time and place references may also change. For example:

  • Direct speech: “I will meet you at the park tomorrow,” said Tom.
  • Indirect speech: Tom said he would meet you at the park the next day.

In the above example, “tomorrow” changes to “the next day” in indirect speech.

Overall, it is important to understand the differences between direct and indirect speech to report speech accurately and effectively. By following the rules of direct and indirect speech, you can convey the intended message of the original speaker.

Converting Direct Speech Into Indirect Speech

When you need to report what someone said in your own words, you can use indirect speech. To convert direct speech into indirect speech, you need to follow a few rules.

Step 1: Remove the Quotation Marks

The first step is to remove the quotation marks that enclose the relayed text. This is because indirect speech does not use the exact words of the speaker.

Step 2: Use a Reporting Verb and a Linker

To indicate that you are reporting what someone said, you need to use a reporting verb such as “said,” “asked,” “told,” or “exclaimed.” You also need to use a linker such as “that” or “whether” to connect the reporting verb to the reported speech.

For example:

  • Direct speech: “I love ice cream,” said Mary.
  • Indirect speech: Mary said that she loved ice cream.

Step 3: Change the Tense of the Verb

When you use indirect speech, you need to change the tense of the verb in the reported speech to match the tense of the reporting verb.

  • Direct speech: “I am going to the store,” said John.
  • Indirect speech: John said that he was going to the store.

Step 4: Change the Pronouns

You also need to change the pronouns in the reported speech to match the subject of the reporting verb.

  • Direct speech: “Are you busy now?” Tina asked me.
  • Indirect speech: Tina asked whether I was busy then.

By following these rules, you can convert direct speech into indirect speech and report what someone said in your own words.
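As a rough, purely illustrative sketch of the four steps above, the Python snippet below strips the quotation marks, adds a reporting clause with the linker "that", and applies a tiny word-level table for the tense backshift and pronoun change. The regular expression, the lookup tables, and the function name are all assumptions made for this toy example; real sentences need genuine grammatical analysis.

```python
# A minimal sketch of the four conversion steps, assuming input shaped
# exactly like '"I love ice cream," said Mary.' The word-level swaps are
# only illustrative; they do not generalise to arbitrary sentences.
import re

BACKSHIFT = {"am": "was", "is": "was", "are": "were",
             "love": "loved", "will": "would", "can": "could"}
PRONOUNS = {"i": "she", "my": "her", "me": "her"}

def report(direct: str) -> str:
    quote, speaker = re.match(r'"(.+?),?" said (\w+)\.', direct).groups()
    words = [PRONOUNS.get(w.lower(), BACKSHIFT.get(w.lower(), w.lower()))
             for w in quote.split()]                      # Steps 3 and 4
    return f"{speaker} said that {' '.join(words)}."      # Steps 1 and 2

print(report('"I love ice cream," said Mary.'))
# -> Mary said that she loved ice cream.
```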

Converting Indirect Speech Into Direct Speech

Converting indirect speech into direct speech involves changing the reported speech to its original form as spoken by the speaker. Here are the steps to follow when converting indirect speech into direct speech:

  • Identify the reporting verb: The first step is to identify the reporting verb used in the indirect speech. This will help you determine the tense of the direct speech.
  • Change the pronouns: The next step is to change the pronouns in the indirect speech to match the original speaker. For example, if the indirect speech is “She said that she was going to the store,” the direct speech would be: She said, “I am going to the store.”
  • Change the tense: Change the tense of the verbs back to the tense the original speaker used. For example, if the indirect speech is “He said that he would visit tomorrow,” the direct speech would be: He said, “I will visit tomorrow.”
  • Remove the reporting verb and conjunction: In direct speech, there is no need for a reporting verb or conjunction. Simply remove them from the indirect speech to get the direct speech.

Here is an example to illustrate the process:

Indirect Speech: John said that he was tired and wanted to go home.

Direct Speech: “I am tired and want to go home,” John said.

By following these steps, you can easily convert indirect speech into direct speech.

Examples of Direct and Indirect Speech

Direct and indirect speech are two ways to report what someone has said. Direct speech reports the exact words spoken by a person, while indirect speech reports the meaning of what was said. Here are some examples of both types of speech:

Direct Speech Examples

Direct speech is used when you want to report the exact words spoken by someone. It is usually enclosed in quotation marks and is often used in dialogue.

  • “I am going to the store,” said Sarah.
  • “It’s a beautiful day,” exclaimed John.
  • “Please turn off the lights,” Mom told me.
  • “I will meet you at the library,” said Tom.
  • “We are going to the beach tomorrow,” announced Mary.

Indirect Speech Examples

Indirect speech, also known as reported speech, is used to report what someone said without using their exact words. It is often used in news reports, academic writing, and in situations where you want to paraphrase what someone said.

Here are some examples of indirect speech:

  • Sarah said that she was going to the store.
  • John exclaimed that it was a beautiful day.
  • Mom told me to turn off the lights.
  • Tom said that he would meet me at the library.
  • Mary announced that they were going to the beach tomorrow.

In indirect speech, the verb tense may change to reflect the time of the reported speech. For example, “I am going to the store” becomes “Sarah said that she was going to the store.” Additionally, the pronouns and possessive adjectives may also change to reflect the speaker and the person being spoken about.

Overall, both direct and indirect speech are important tools for reporting what someone has said. By using these techniques, you can accurately convey the meaning of what was said while also adding your own interpretation and analysis.

Frequently Asked Questions

What is direct and indirect speech?

Direct and indirect speech refer to the ways in which we communicate what someone has said. Direct speech involves repeating the exact words spoken, using quotation marks to indicate that you are quoting someone. Indirect speech, on the other hand, involves reporting what someone has said without using their exact words.

How do you convert direct speech to indirect speech?

To convert direct speech to indirect speech, you need to change the tense of the verbs, pronouns, and time expressions. You also need to introduce a reporting verb, such as “said,” “told,” or “asked.” For example, “I love ice cream,” said Mary (direct speech) can be converted to “Mary said that she loved ice cream” (indirect speech).

What is the difference between direct speech and indirect speech?

The main difference between direct speech and indirect speech is that direct speech uses the exact words spoken, while indirect speech reports what someone has said without using their exact words. Direct speech is usually enclosed in quotation marks, while indirect speech is not.

What are some examples of direct and indirect speech?

Some examples of direct speech include “I am going to the store,” said John and “I love pizza,” exclaimed Sarah. Some examples of indirect speech include John said that he was going to the store and Sarah exclaimed that she loved pizza.

What are the rules for converting direct speech to indirect speech?

The rules for converting direct speech to indirect speech include changing the tense of the verbs, pronouns, and time expressions. You also need to introduce a reporting verb and use appropriate reporting verbs such as “said,” “told,” or “asked.”

What is a summary of direct and indirect speech?

Direct and indirect speech are two ways of reporting what someone has said. Direct speech involves repeating the exact words spoken, while indirect speech reports what someone has said without using their exact words. To convert direct speech to indirect speech, you need to change the tense of the verbs, pronouns, and time expressions and introduce a reporting verb.


Reported Speech in English Grammar


Introduction

In English grammar, we use reported speech to say what another person has said. We can use their exact words with quotation marks (this is known as direct speech), or we can use indirect speech. In indirect speech, we change the tense and pronouns to show that some time has passed. Indirect speech is often introduced by a reporting verb or phrase such as he said or she asked.


When turning direct speech into indirect speech, we need to pay attention to the following points:

  • changing the pronouns. Example: He said, “I saw a famous TV presenter.” → He said (that) he had seen a famous TV presenter.
  • changing the information about time and place (see the table at the end of this page). Example: He said, “I saw a famous TV presenter here yesterday.” → He said (that) he had seen a famous TV presenter there the day before.
  • changing the tense (backshift). Example: He said, “She was eating an ice-cream at the table where you are sitting.” → He said (that) she had been eating an ice-cream at the table where I was sitting.

If the introductory clause is in the simple past (e.g. He said ), the tense has to be set back by one degree (see the table). The term for this in English is backshift .

The verbs could, should, would, might, must, needn’t, ought to, used to normally do not change.

If the introductory clause is in the simple present , however (e.g. He says ), then the tense remains unchanged, because the introductory clause already indicates that the statement is being immediately repeated (and not at a later point in time).

In some cases, however, we have to change the verb form.

When turning questions into indirect speech, we have to pay attention to the following points:

  • As in a declarative sentence, we have to change the pronouns, the time and place information, and set the tense back ( backshift ).
  • Instead of that , we use a question word. If there is no question word, we use whether / if instead. Example: She asked him, “ How often do you work?” → She asked him how often he worked. He asked me, “Do you know any famous people?” → He asked me if/whether I knew any famous people.
  • We put the subject before the verb in question sentences. (The subject goes after the auxiliary verb in normal questions.) Example: I asked him, “ Have you met any famous people before?” → I asked him if/whether he had met any famous people before.
  • We don’t use the auxiliary verb do for questions in indirect speech. Therefore, we sometimes have to conjugate the main verb (for third person singular or in the simple past ). Example: I asked him, “What do you want to tell me?” → I asked him what he wanted to tell me.
  • We put the verb directly after who or what in subject questions. Example: I asked him, “ Who is sitting here?” → I asked him who was sitting there.

We don’t just use indirect questions to report what another person has asked. We also use them to ask questions in a very polite manner.

When turning demands and requests into indirect speech, we only need to change the pronouns and the time and place information. We don’t have to pay attention to the tenses – we simply use an infinitive .

If it is a negative demand, then in indirect speech we use not + infinitive .

To express what someone should or can do in reported speech, we leave out the subject and the modal verb and instead we use the construction who/what/where/how + infinitive.
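As a quick sketch of the infinitive pattern for reported demands and requests, the hypothetical helper below simply prefixes to (or not to for a negative demand) to the bare imperative. It assumes the imperative begins with the base verb and uses told as the reporting verb; everything else is left out.

```python
def report_demand(reporter: str, listener: str, imperative: str,
                  negative: bool = False) -> str:
    """Report a demand/request with (not) to + infinitive, assuming the
    imperative begins with the bare verb (e.g. 'Close the door.')."""
    base = imperative.strip().rstrip("!.").lower()
    prefix = "not to" if negative else "to"
    return f"{reporter} told {listener} {prefix} {base}."

print(report_demand("He", "her", "Close the door."))          # He told her to close the door.
print(report_demand("She", "us", "Be late.", negative=True))  # She told us not to be late.
```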

Say or Tell?

The words say and tell are not interchangeable.

say = say something
tell = say something to someone


Reported speech: statements

Do you know how to report what somebody else said?

Look at these examples to see how we can tell someone what another person said.

direct speech: 'I love the Toy Story films,' she said. indirect speech: She said she loved the Toy Story films.
direct speech: 'I worked as a waiter before becoming a chef,' he said. indirect speech: He said he'd worked as a waiter before becoming a chef.
direct speech: 'I'll phone you tomorrow,' he said. indirect speech: He said he'd phone me the next day.


Grammar explanation

Reported speech is when we tell someone what another person said. To do this, we can use direct speech or indirect speech.

direct speech: 'I work in a bank,' said Daniel.
indirect speech: Daniel said that he worked in a bank.

In indirect speech, we often use a tense which is 'further back' in the past (e.g. worked ) than the tense originally used (e.g. work ). This is called 'backshift'. We also may need to change other words that were used, for example pronouns.

Present simple, present continuous and present perfect

When we backshift, present simple changes to past simple, present continuous changes to past continuous and present perfect changes to past perfect.

'I travel a lot in my job.' Jamila said that she travelled a lot in her job.
'The baby's sleeping!' He told me the baby was sleeping.
'I've hurt my leg.' She said she'd hurt her leg.

Past simple and past continuous

When we backshift, past simple usually changes to past perfect simple, and past continuous usually changes to past perfect continuous.

'We lived in China for five years.' She told me they'd lived in China for five years.
'It was raining all day.' He told me it had been raining all day.

Past perfect

The past perfect doesn't change.

'I'd tried everything without success, but this new medicine is great.' He said he'd tried everything without success, but the new medicine was great.

No backshift

If what the speaker has said is still true or relevant, it's not always necessary to change the tense. This might happen when the speaker has used a present tense.

'I go to the gym next to your house.' Jenny told me that she goes to the gym next to my house. I'm thinking about going with her.
'I'm working in Italy for the next six months.' He told me he's working in Italy for the next six months. Maybe I should visit him!
'I've broken my arm!' She said she's broken her arm, so she won't be at work this week.

Pronouns, demonstratives and adverbs of time and place

Pronouns also usually change in indirect speech.

'I enjoy working in my garden,' said Bob. Bob said that he enjoyed working in his garden.
'We played tennis for our school,' said Alina. Alina told me they'd played tennis for their school.

However, if you are the person or one of the people who spoke, then the pronouns don't change.

'I'm working on my thesis,' I said. I told her that I was working on my thesis.
'We want our jobs back!' we said. We said that we wanted our jobs back.

We also change demonstratives and adverbs of time and place if they are no longer accurate.

'This is my house.' He said this was his house. [You are currently in front of the house.] He said that was his house. [You are not currently in front of the house.]
'We like it here.' She told me they like it here. [You are currently in the place they like.] She told me they like it there. [You are not in the place they like.]
'I'm planning to do it today.' She told me she's planning to do it today. [It is currently still the same day.] She told me she was planning to do it that day. [It is not the same day any more.]

In the same way, these changes to those, now changes to then, yesterday changes to the day before, tomorrow changes to the next/following day and ago changes to before.
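Those typical substitutions can be gathered into a simple lookup table. The Python dictionary below is only a sketch of the shifts named above; as the bracketed examples show, they apply only when the original reference is no longer accurate from the reporter's point of view.

```python
# Typical shifts for demonstratives and adverbs of time and place in
# indirect speech; apply only when the original reference is no longer accurate.
TIME_PLACE_SHIFTS = {
    "this": "that", "these": "those",
    "here": "there", "now": "then",
    "today": "that day", "yesterday": "the day before",
    "tomorrow": "the next day", "ago": "before",
}

print(TIME_PLACE_SHIFTS["tomorrow"])   # the next day
```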


Hello Team. If the reporting verb is in the present perfect, do we have to backshift the tenses of the direct speech or not? For example: He has said, "I bought a car yesterday."

1- He has said that he bought a car yesterday.

2- He has said that he had bought a car the previous day.


Hello Ahmed Imam,

It's not necessary to backshift the verb form if the situation being reported is still true. For example:

"I'm a doctor"

She told me she is a doctor. [she was a doctor when she said it and she is still a doctor now]

She told me she was a doctor. [she was a doctor when she said it and may or may not still be a doctor now]

The reporting verb in your example would be 'said' rather than 'has said' as we are talking about a particular moment in the past. For the other verb both 'bought' and 'had bought' are possible without any change in meaning. In fact, when the verb is past in the original sentence we usually do not shift the verb form back.

The LearnEnglish Team

Hello again. Which one is correct? Why?

- He has said that he (will - would) travel to Cairo with his father.

The present perfect is a present form, so generally 'will' is the correct form.

In this case, assuming that the man said 'I will travel to Cairo', then 'will' is the correct form. But if the man said 'I would travel to Cairo if I had time to do it', then 'would' would be the correct form since it is part of a conditional statement.

I think you were asking about the first situation (the general one), though. Does that make sense?

Best wishes, Kirk LearnEnglish team

Thank you for the information. It states that if what the speaker has said is still true or relevant, it's not always necessary to change the tense. I wonder if it is still correct to change the tense in this example: 'London is in the UK,' he said. → He said London was in the UK. Or does it have to stay in the present tense?

Hello Wen1996,

Yes, your version of the sentence is also correct. In this case, the past tense refers to the time the speaker made this statement. But this doesn't mean the statement isn't also true now.

Good evening from Turkey.

Is the following example correct: Question: When did she watch the movie?

She asked me when she had watched the movie. Or is it: had she watched the movie?

Do subjects come before the verbs? Thank you.

Hello muratt,

This is a reported question, not an actual question, as you can see from the fact that it has no question mark at the end. Therefore no inversion is needed and the normal subject-verb word order is maintained: ...she had watched... is correct.

You can read more about this here:

https://learnenglish.britishcouncil.org/grammar/b1-b2-grammar/reported-speech-questions

Thank you for your response.

Hello Sir, kindly help with the following sentence-

She said, "When I was a child I wasn't afraid of ghosts." 

Please tell me how to write this sentence in reported/ indirect speech.


Reported Speech


Reported Statements

Here's how it works:

We use a 'reporting verb' like 'say' or 'tell'. If this verb is in the present tense, it's easy. We just put 'she says' and then the sentence:

  • Direct speech: I like ice cream.
  • Reported speech: She says (that) she likes ice cream.

We don't need to change the tense, though probably we do need to change the 'person' from 'I' to 'she', for example. We also may need to change words like 'my' and 'your'. (As I'm sure you know, often, we can choose if we want to use 'that' or not in English. I've put it in brackets () to show that it's optional. It's exactly the same if you use 'that' or if you don't use 'that'.)

But if the reporting verb is in the past tense, then usually we change the tenses in the reported speech:

  • Reported speech: She said (that) she liked ice cream.

If the information in the direct speech is still true (a general fact), the tense doesn't always have to change:

  • Direct speech: The sky is blue.
  • Reported speech: She said (that) the sky is/was blue.


Reported Questions

So now you have no problem with making reported speech from positive and negative sentences. But how about questions?

  • Direct speech: Where do you live?
  • Reported speech: She asked me where I lived.
  • Direct speech: Where is Julie?
  • Reported speech: She asked me where Julie was.
  • Direct speech: Do you like chocolate?
  • Reported speech: She asked me if I liked chocolate.

Reported Requests

There's more! What if someone asks you to do something (in a polite way)? For example:

  • Direct speech: Close the window, please
  • Or: Could you close the window please?
  • Or: Would you mind closing the window please?
  • Reported speech: She asked me to close the window.
  • Direct speech: Please don't be late.
  • Reported speech: She asked us not to be late.

Reported Orders

  • Direct speech: Sit down!
  • Reported speech: She told me to sit down.



Changes in Indirect Speech

Welcome to a comprehensive tutorial on the proper use, types, and rules of indirect speech in English grammar. Indirect speech, also called reported speech, allows us to report the content of what another person said without quoting their exact words. It is particularly useful in written language. This tutorial explains the changes that occur when switching from direct speech to indirect speech and the rules which must be followed during this transition.


Understanding Direct and Indirect Speech

Direct speech refers to the exact wording that someone uses when speaking. Indirect speech, by contrast, conveys the content of the person's original words without quoting them.

Direct Speech: He said, “I am hungry.” Indirect Speech: He said that he was hungry.

Notably, an essential component of indirect speech is the change in verb tense. In the direct speech example, the speaker uses the present tense “am.” In the indirect version, even though the speaker is still hungry, the tense changes to the past “was.”

Changes in Verb Tenses

The verb tense in indirect speech is one step back in time from the tense in the direct speech. Here are the common changes:

  • Present Simple becomes Past Simple.
  • Present Continuous becomes Past Continuous.
  • Present Perfect becomes Past Perfect.
  • Present Perfect Continuous becomes Past Perfect Continuous.
  • Past Simple becomes Past Perfect.

Direct: He said, “I need help.” Indirect: He said he needed help.

Direct: She said, “I am reading a book.” Indirect: She said that she was reading a book.

Changes in Time and Place References

Besides the tense, word usage for place and time often changes when converting from direct to indirect speech.

  • ‘Now’ changes to ‘then’.
  • ‘Today’ changes to ‘that day’.
  • ‘Yesterday’ turns into ‘the day before’ or ‘the previous day’.
  • ‘Tomorrow’ changes to ‘the next day’ or ‘the following day’.
  • ‘Last week/month/year’ switches to ‘the previous week/month/year’.
  • ‘Next week/month/year’ changes to ‘the following week/month/year’.
  • ‘Here’ turns into ‘there’.

Direct: He said, “I will do it tomorrow.” Indirect: He said that he would do it the next day.

Direct: She said, “I was here.”

Indirect: She said that she was there.

Changes in Modals

Modals also change when transforming direct speech into indirect speech. Here are some common changes:

  • ‘Can’ changes to ‘could’.
  • ‘May’ changes to ‘might’.
  • ‘Will’ changes to ‘would’.
  • ‘Shall’ changes to ‘should’.

Direct: She said, “I can play the piano.” Indirect: She said that she could play the piano.

Direct: He said, “I will go shopping.” Indirect: He said that he would go shopping.
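The tense and modal changes listed above amount to a one-step-back mapping, summarised in the sketch below. The dictionaries pair grammatical labels (and modal verbs) with their backshifted counterparts; the names are made up for illustration, and no actual verb conjugation is attempted.

```python
# Backshift summary from the two lists above: tenses move one step back
# in time, and the listed modals take their past forms.
TENSE_BACKSHIFT = {
    "present simple": "past simple",
    "present continuous": "past continuous",
    "present perfect": "past perfect",
    "present perfect continuous": "past perfect continuous",
    "past simple": "past perfect",
}
MODAL_BACKSHIFT = {"can": "could", "may": "might",
                   "will": "would", "shall": "should"}

print(TENSE_BACKSHIFT["present simple"], "|", MODAL_BACKSHIFT["will"])
# past simple | would
```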

Reporting Orders, Requests, and Questions

When reporting orders, requests, and questions, the structure also changes. The following is the structure:

  • ‘To’ + infinitive for orders.
  • Interrogative word + subject + verb for questions.
  • Could/Would + subject + verb for polite requests.

Direct: He said to her, “Close the door.” Indirect: He told her to close the door.

Direct: She asked, “Where is the station?” Indirect: She asked where the station was.

In conclusion, reported speech becomes easier to understand and use effectively with practice. Understanding the transition from direct to indirect speech is vital to expressing yourself accurately and professionally, especially in written English. This guide provides the foundational information for mastering the changes in indirect speech. Practice these rules to become more fluent and confident in your English communication skills.


Direct and Indirect Speech: The Ultimate Guide

Direct and Indirect Speech are the two ways of reporting what someone said. The use of both direct and indirect speech is crucial in effective communication and writing. Understanding the basics of direct and indirect speech is important, but mastering the advanced techniques of these two forms of speech can take your writing to the next level. In this article, we will explore direct and indirect speech in detail and provide you with a comprehensive guide that covers everything you need to know.


What is Direct Speech?

Direct speech is a way of reporting what someone said using their exact words. Direct speech is typically enclosed in quotation marks to distinguish it from the writer’s own words. Here are some examples of direct speech:

  • “I am going to the store,” said John.
  • “I love ice cream,” exclaimed Mary.
  • “The weather is beautiful today,” said Sarah.

In direct speech, the exact words spoken by the speaker are used, and the tense and pronouns used in the quote are maintained. Punctuation is also important in direct speech. Commas are used to separate the quote from the reporting verb, and full stops, question marks, or exclamation marks are used at the end of the quote, depending on the tone of the statement.

What is Indirect Speech?

Indirect speech is a way of reporting what someone said using a paraphrased version of their words. In indirect speech, the writer rephrases the speaker’s words and incorporates them into the sentence. Here are some examples of indirect speech:

  • John said that he was going to the store.
  • Mary exclaimed that she loved ice cream.
  • Sarah said that the weather was beautiful that day.

In indirect speech, the tense and pronouns may change, depending on the context of the sentence. Indirect speech is not enclosed in quotation marks, and the use of reporting verbs is important.

Differences Between Direct and Indirect Speech

The structure of direct and indirect speech is different. Direct speech is presented in quotation marks, whereas indirect speech is incorporated into the sentence without quotation marks. The tenses and pronouns used in direct and indirect speech also differ. In direct speech, the tense and pronouns used in the quote are maintained, whereas, in indirect speech, they may change depending on the context of the sentence. Reporting verbs are also used differently in direct and indirect speech. In direct speech, they are used to introduce the quote, while in indirect speech, they are used to report what was said.

How to Convert Direct Speech to Indirect Speech

Converting direct speech to indirect speech involves changing the tense, pronouns, and reporting verb. Here are the steps involved in converting direct speech to indirect speech:

  • Remove the quotation marks.
  • Use a reporting verb to introduce the indirect speech.
  • Change the tense of the verb in the quote if necessary.
  • Change the pronouns if necessary.
  • Use the appropriate conjunction if necessary.

Here is an example of converting direct speech to indirect speech:

Direct speech: “I am going to the store,” said John. Indirect speech: John said that he was going to the store.

How to Convert Indirect Speech to Direct Speech

Converting indirect speech to direct speech involves using the same tense, pronouns, and reporting verb as the original quote. Here are the steps involved in converting indirect speech to direct speech:

  • Remove the reporting verb.
  • Use quotation marks to enclose the direct speech.
  • Maintain the tense of the verb in the quote.
  • Use the same pronouns as the original quote.

Here is an example of converting indirect speech to direct speech:

Indirect speech: John said that he was going to the store. Direct speech: “I am going to the store,” said John.

Advanced Techniques for Using Direct and Indirect Speech

Using direct and indirect speech effectively can add depth and complexity to your writing. Here are some advanced techniques for using direct and indirect speech:

Blending Direct and Indirect Speech

Blending direct and indirect speech involves using both forms of speech in a single sentence or paragraph. This technique can create a more engaging and realistic narrative. Here is an example:

“Sarah said, ‘I can’t believe it’s already winter.’ Her friend replied that she loved the cold weather and was excited about the snowboarding season.”

In this example, direct speech is used to convey Sarah’s words, and indirect speech is used to convey her friend’s response.

Using Reported Questions

Reported questions are a form of indirect speech that convey a question someone asked without using quotation marks. Reported questions often use reporting verbs like “asked” or “wondered.” Here is an example:

“John asked if I had seen the movie last night.”

In this example, the question “Did you see the movie last night?” is reported indirectly without using quotation marks.

Using Direct Speech to Convey Emotion

Direct speech can be used to convey emotion more effectively than indirect speech. When using direct speech to convey emotion, it’s important to choose the right tone and emphasis. Here is an example:

“She screamed, ‘I hate you!’ as she slammed the door.”

In this example, the use of direct speech and the exclamation mark convey the intense emotion of the moment.

  • When should I use direct speech?
  • Direct speech should be used when you want to report what someone said using their exact words. Direct speech is appropriate when you want to convey the speaker’s tone, emphasis, and emotion.
  • When should I use indirect speech?
  • Indirect speech should be used when you want to report what someone said using a paraphrased version of their words. Indirect speech is appropriate when you want to provide information without conveying the speaker’s tone, emphasis, or emotion.
  • What are some common reporting verbs?
  • Some common reporting verbs include “said,” “asked,” “exclaimed,” “whispered,” “wondered,” and “suggested.”

Direct and indirect speech are important tools for effective communication and writing. Understanding the differences between these two forms of speech and knowing how to use them effectively can take your writing to the next level. By using advanced techniques like blending direct and indirect speech and using direct speech to convey emotion, you can create engaging and realistic narratives that resonate with your readers.



100 Reported Speech Examples: How To Change Direct Speech Into Indirect Speech

Reported speech, also known as indirect speech, is a way of communicating what someone else has said without quoting their exact words. For example, if your friend said, “ I am going to the store ,” in reported speech, you might convey this as, “ My friend said he was going to the store. ” Reported speech is common in both spoken and written language, especially in storytelling, news reporting, and everyday conversations.

Reported speech can be quite challenging for English language learners because in order to change direct speech into reported speech, one must change the perspective and tense of what was said by the original speaker or writer. In this guide, we will explain in detail how to change direct speech into indirect speech and provide lots of examples of reported speech to help you understand. Here are the key aspects of converting direct speech into reported speech.

Reported Speech: Changing Pronouns

Pronouns are usually changed to match the perspective of the person reporting the speech. For example, “I” in direct speech may become “he” or “she” in reported speech, depending on the context. Here are some example sentences:

  • Direct : “I am going to the park.” Reported : He said he was going to the park .
  • Direct : “You should try the new restaurant.” Reported : She said that I should try the new restaurant.
  • Direct : “We will win the game.” Reported : They said that they would win the game.
  • Direct : “She loves her new job.” Reported : He said that she loves her new job.
  • Direct : “He can’t come to the party.” Reported : She said that he couldn’t come to the party.
  • Direct : “It belongs to me.” Reported : He said that it belonged to him .
  • Direct : “They are moving to a new city.” Reported : She said that they were moving to a new city.
  • Direct : “You are doing a great job.” Reported : He told me that I was doing a great job.
  • Direct : “I don’t like this movie.” Reported : She said that she didn’t like that movie.
  • Direct : “We have finished our work.” Reported : They said that they had finished their work.
  • Direct : “You will need to sign here.” Reported : He said that I would need to sign there.
  • Direct : “She can solve the problem.” Reported : He said that she could solve the problem.
  • Direct : “He was not at home yesterday.” Reported : She said that he had not been at home the day before.
  • Direct : “It is my responsibility.” Reported : He said that it was his responsibility.
  • Direct : “We are planning a surprise.” Reported : They said that they were planning a surprise.

Reported Speech: Reporting Verbs

In reported speech, various reporting verbs are used depending on the nature of the statement or the intention behind the communication. These verbs are essential for conveying the original tone, intent, or action of the speaker. Here are some examples demonstrating the use of different reporting verbs in reported speech:

  • Direct: “I will help you,” she promised . Reported: She promised that she would help me.
  • Direct: “You should study harder,” he advised . Reported: He advised that I should study harder.
  • Direct: “I didn’t take your book,” he denied . Reported: He denied taking my book .
  • Direct: “Let’s go to the cinema,” she suggested . Reported: She suggested going to the cinema .
  • Direct: “I love this song,” he confessed . Reported: He confessed that he loved that song.
  • Direct: “I haven’t seen her today,” she claimed . Reported: She claimed that she hadn’t seen her that day.
  • Direct: “I will finish the project,” he assured . Reported: He assured me that he would finish the project.
  • Direct: “I’m not feeling well,” she complained . Reported: She complained of not feeling well.
  • Direct: “This is how you do it,” he explained . Reported: He explained how to do it.
  • Direct: “I saw him yesterday,” she stated . Reported: She stated that she had seen him the day before.
  • Direct: “Please open the window,” he requested . Reported: He requested that I open the window.
  • Direct: “I can win this race,” he boasted . Reported: He boasted that he could win the race.
  • Direct: “I’m moving to London,” she announced . Reported: She announced that she was moving to London.
  • Direct: “I didn’t understand the instructions,” he admitted . Reported: He admitted that he didn’t understand the instructions.
  • Direct: “I’ll call you tonight,” she promised . Reported: She promised to call me that night.

Reported Speech: Tense Shifts

When converting direct speech into reported speech, the verb tense is often shifted back one step in time. This is known as the “backshift” of tenses. It’s essential to adjust the tense to reflect the time elapsed between the original speech and the reporting. Here are some examples to illustrate how different tenses in direct speech are transformed in reported speech:

  • Direct: “I am eating.” Reported: He said he was eating.
  • Direct: “They will go to the park.” Reported: She mentioned they would go to the park.
  • Direct: “We have finished our homework.” Reported: They told me they had finished their homework.
  • Direct: “I do my exercises every morning.” Reported: He explained that he did his exercises every morning.
  • Direct: “She is going to start a new job.” Reported: He heard she was going to start a new job.
  • Direct: “I can solve this problem.” Reported: She said she could solve that problem.
  • Direct: “We are visiting Paris next week.” Reported: They said they were visiting Paris the following week.
  • Direct: “I will be waiting outside.” Reported: He stated he would be waiting outside.
  • Direct: “They have been studying for hours.” Reported: She mentioned they had been studying for hours.
  • Direct: “I can’t understand this chapter.” Reported: He complained that he couldn’t understand that chapter.
  • Direct: “We were planning a surprise.” Reported: They told me they had been planning a surprise.
  • Direct: “She has to complete her assignment.” Reported: He said she had to complete her assignment.
  • Direct: “I will have finished the project by Monday.” Reported: She stated she would have finished the project by Monday.
  • Direct: “They are going to hold a meeting.” Reported: She heard they were going to hold a meeting.
  • Direct: “I must leave.” Reported: He said he had to leave.

Reported Speech: Changing Time and Place References

When converting direct speech into reported speech, references to time and place often need to be adjusted to fit the context of the reported speech. This is because the time and place relative to the speaker may have changed from the original statement to the time of reporting. Here are some examples to illustrate how time and place references change:

  • Direct: “I will see you tomorrow .” Reported: He said he would see me the next day .
  • Direct: “We went to the park yesterday .” Reported: They said they went to the park the day before .
  • Direct: “I have been working here since Monday .” Reported: She mentioned she had been working there since Monday .
  • Direct: “Let’s meet here at noon.” Reported: He suggested meeting there at noon.
  • Direct: “I bought this last week .” Reported: She said she had bought it the previous week .
  • Direct: “I will finish this by tomorrow .” Reported: He stated he would finish it by the next day .
  • Direct: “She will move to New York next month .” Reported: He heard she would move to New York the following month .
  • Direct: “They were at the festival this morning .” Reported: She said they were at the festival that morning .
  • Direct: “I saw him here yesterday.” Reported: She mentioned she saw him there the day before.
  • Direct: “We will return in a week .” Reported: They said they would return in a week .
  • Direct: “I have an appointment today .” Reported: He said he had an appointment that day .
  • Direct: “The event starts next Friday .” Reported: She mentioned the event starts the following Friday .
  • Direct: “I lived in Berlin two years ago .” Reported: He stated he had lived in Berlin two years before .
  • Direct: “I will call you tonight .” Reported: She said she would call me that night .
  • Direct: “I was at the office yesterday .” Reported: He mentioned he was at the office the day before .

Reported Speech: Question Format

When converting questions from direct speech into reported speech, the format changes significantly. Unlike statements, questions require rephrasing into a statement format and often involve the use of introductory verbs like ‘asked’ or ‘inquired’. Here are some examples to demonstrate how questions in direct speech are converted into statements in reported speech:

  • Direct: “Are you coming to the party?” Reported: She asked if I was coming to the party.
  • Direct: “What time is the meeting?” Reported: He inquired what time the meeting was.
  • Direct: “Why did you leave early?” Reported: They wanted to know why I had left early.
  • Direct: “Can you help me with this?” Reported: She asked if I could help her with that.
  • Direct: “Where did you buy this?” Reported: He wondered where I had bought that.
  • Direct: “Who is going to the concert?” Reported: They asked who was going to the concert.
  • Direct: “How do you solve this problem?” Reported: She questioned how to solve that problem.
  • Direct: “Is this the right way to the station?” Reported: He inquired whether it was the right way to the station.
  • Direct: “Do you know her name?” Reported: They asked if I knew her name.
  • Direct: “Why are they moving out?” Reported: She wondered why they were moving out.
  • Direct: “Have you seen my keys?” Reported: He asked if I had seen his keys.
  • Direct: “What were they talking about?” Reported: She wanted to know what they had been talking about.
  • Direct: “When will you return?” Reported: He asked when I would return.
  • Direct: “Can she drive a manual car?” Reported: They inquired if she could drive a manual car.
  • Direct: “How long have you been waiting?” Reported: She asked how long I had been waiting.
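One mechanical part of the question rules above is choosing the linker: a reported wh-question keeps its question word, while a yes/no question takes if (or whether). The small sketch below makes only that choice; the function name is invented for illustration, and it does not attempt the word-order or tense changes.

```python
# Choose the linker for a reported question: the wh-word itself, or "if"
# when the question has no wh-word (a yes/no question).
WH_WORDS = {"who", "what", "when", "where", "why", "how", "which", "whose"}

def reported_question_linker(question: str) -> str:
    first_word = question.strip().strip('"').split()[0].lower()
    return first_word if first_word in WH_WORDS else "if"

print(reported_question_linker("Where did you buy this?"))       # where
print(reported_question_linker("Are you coming to the party?"))  # if
```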

Reported Speech: Omitting Quotation Marks

In reported speech, quotation marks are not used, differentiating it from direct speech which requires them to enclose the spoken words. Reported speech summarizes or paraphrases what someone said without the need for exact wording. Here are examples showing how direct speech with quotation marks is transformed into reported speech without them:

  • Direct: “I am feeling tired,” she said. Reported: She said she was feeling tired.
  • Direct: “We will win the game,” he exclaimed. Reported: He exclaimed that they would win the game.
  • Direct: “I don’t like apples,” the boy declared. Reported: The boy declared that he didn’t like apples.
  • Direct: “You should visit Paris,” she suggested. Reported: She suggested that I should visit Paris.
  • Direct: “I will be late,” he warned. Reported: He warned that he would be late.
  • Direct: “I can’t believe you did that,” she expressed in surprise. Reported: She expressed her surprise that I had done that.
  • Direct: “I need help with this task,” he admitted. Reported: He admitted that he needed help with the task.
  • Direct: “I have never been to Italy,” she confessed. Reported: She confessed that she had never been to Italy.
  • Direct: “We saw a movie last night,” they mentioned. Reported: They mentioned that they saw a movie the night before.
  • Direct: “I am learning to play the piano,” he revealed. Reported: He revealed that he was learning to play the piano.
  • Direct: “You must finish your homework,” she instructed. Reported: She instructed that I must finish my homework.
  • Direct: “I will call you tomorrow,” he promised. Reported: He promised that he would call me the next day.
  • Direct: “I have finished my assignment,” she announced. Reported: She announced that she had finished her assignment.
  • Direct: “I cannot attend the meeting,” he apologized. Reported: He apologized for not being able to attend the meeting.
  • Direct: “I don’t remember where I put it,” she confessed. Reported: She confessed that she didn’t remember where she put it.

Thanks for reading! I hope you found these reported speech examples useful.


Reported speech

Reported speech is how we represent the speech of other people or what we ourselves say. There are two main types of reported speech: direct speech and indirect speech.

Direct speech repeats the exact words the person used, or how we remember their words:

Barbara said, “I didn’t realise it was midnight.”

In indirect speech, the original speaker’s words are changed.

Barbara said she hadn’t realised it was midnight .

In this example, I becomes she and the verb tense reflects the fact that time has passed since the words were spoken: didn’t realise becomes hadn’t realised .

Indirect speech focuses more on the content of what someone said rather than their exact words:

“I’m sorry,” said Mark. (direct)
Mark apologised . (indirect: report of a speech act)

In a similar way, we can report what people wrote or thought:

‘I will love you forever,’ he wrote, and then posted the note through Alice’s door. (direct report of what someone wrote)
He wrote that he would love her forever , and then posted the note through Alice’s door. (indirect report of what someone wrote)
I need a new direction in life , she thought. (direct report of someone’s thoughts)
She thought that she needed a new direction in life . (indirect report of someone’s thoughts)


Reported speech: reporting and reported clauses

Speech reports consist of two parts: the reporting clause and the reported clause. The reporting clause includes a verb such as say, tell, ask, reply, shout , usually in the past simple, and the reported clause includes what the original speaker said.

Reported speech: punctuation

Direct speech

In direct speech we usually put a comma between the reporting clause and the reported clause. The words of the original speaker are enclosed in inverted commas, either single (‘…’) or double (“…”). If the reported clause comes first, we put the comma inside the inverted commas:

“ I couldn’t sleep last night, ” he said.
Rita said, ‘ I don’t need you any more. ’

If the direct speech is a question or exclamation, we use a question mark or exclamation mark, not a comma:

‘Is there a reason for this ? ’ she asked.
“I hate you ! ” he shouted.

We sometimes use a colon (:) between the reporting clause and the reported clause when the reporting clause is first:

The officer replied: ‘It is not possible to see the General. He’s busy.’


Indirect speech

In indirect speech it is more common for the reporting clause to come first. When the reporting clause is first, we don’t put a comma between the reporting clause and the reported clause. When the reporting clause comes after the reported clause, we use a comma to separate the two parts:

She told me they had left her without any money.
Not: She told me, they had left her without any money .
Nobody had gone in or out during the previous hour, he informed us.

We don’t use question marks or exclamation marks in indirect reports of questions and exclamations:

He asked me why I was so upset.
Not: He asked me why I was so upset?

Reported speech: reporting verbs

Say and tell

We can use say and tell to report statements in direct speech, but say is more common. We don’t always mention the person being spoken to with say, but if we do mention them, we use a prepositional phrase with to (to me, to Lorna):

‘I’ll give you a ring tomorrow,’ she said.
‘Try to stay calm,’ she said to us in a low voice.
Not: ‘Try to stay calm,’ she said us in a low voice.

With tell, we always mention the person being spoken to; we use an indirect object (them in the example below):

‘Enjoy yourselves,’ he told them.
Not: ‘Enjoy yourselves,’ he told.

In indirect speech, say and tell are both common as reporting verbs. We don’t use an indirect object with say, but we always use an indirect object (me in the examples below) with tell:

He said he was moving to New Zealand.
Not: He said me he was moving to New Zealand.
He told me he was moving to New Zealand.
Not: He told he was moving to New Zealand.

We use say, but not tell, to report questions:

‘Are you going now?’ she said.
Not: ‘Are you going now?’ she told me.

We use say, not tell, to report greetings, congratulations and other wishes:

‘Happy birthday!’ she said.
Not: ‘Happy birthday!’ she told me.
Everyone said good luck to me as I went into the interview.
Not: Everyone told me good luck …


Other reporting verbs

Reporting verbs such as admit, maintain and point out are more common in indirect reports, in both speaking and writing:

Simon admitted that he had forgotten to email Andrea.
Louis always maintains that there is royal blood in his family.
The builder pointed out that the roof was in very poor condition.

Most of these verbs are used in direct speech reports in written texts such as novels and newspaper reports; in ordinary conversation, we don’t use them in direct speech. The reporting clause usually comes second, but can sometimes come first:

‘Who is that person?’ she asked.
‘It was my fault,’ he confessed.
‘There is no cause for alarm,’ the Minister insisted.



Reported Speech - Definition, Rules and Usage with Examples

Reported speech or indirect speech is the form of speech used to convey what was said by someone at some point in time. This article will help you with all that you need to know about reported speech: its meaning, its definition, and how and when to use it, along with examples. Furthermore, try out the practice questions given to check how far you have understood the topic.


Table of Contents

  • Definition of Reported Speech
  • Rules to Be Followed When Using Reported Speech
  • Table 1 – Change of Pronouns
  • Table 2 – Change of Adverbs of Place and Adverbs of Time
  • Table 3 – Change of Tense
  • Table 4 – Change of Modal Verbs
  • Tips to Practise Reported Speech
  • Examples of Reported Speech
  • Check Your Understanding of Reported Speech
  • Frequently Asked Questions on Reported Speech in English

What Is Reported Speech?

Reported speech is the form in which one can convey a message said by oneself or someone else, mostly in the past. It can also be described as a third-person account of what someone has said. In this form of speech, you need not use quotation marks, as you are not quoting the exact words spoken by the speaker but just conveying the message.

Now, take a look at the following dictionary definitions for a clearer idea of what it is.

Reported speech, according to the Oxford Learner’s Dictionary, is defined as “a report of what somebody has said that does not use their exact words.” The Collins Dictionary defines reported speech as “speech which tells you what someone said, but does not use the person’s actual words.” According to the Cambridge Dictionary, reported speech is defined as “the act of reporting something that was said, but not using exactly the same words.” The Macmillan Dictionary defines reported speech as “the words that you use to report what someone else has said.”

Reported speech is a little different from direct speech . As it has been discussed already, reported speech is used to tell what someone said and does not use the exact words of the speaker. Take a look at the following rules so that you can make use of reported speech effectively.

  • The first thing you have to keep in mind is that you need not use any quotation marks as you are not using the exact words of the speaker.
  • You can use the following formula to construct a sentence in reported speech: subject + said + that + (report whatever the speaker said).
  • You can use verbs like said, asked, requested, ordered, complained, exclaimed, screamed, told, etc. If you are just reporting a declarative sentence , you can use verbs like told, said, etc. followed by ‘that’ and end the sentence with a full stop . When you are reporting interrogative sentences, you can use the verbs – enquired, inquired, asked, etc. and remove the question mark . In case you are reporting imperative sentences , you can use verbs like requested, commanded, pleaded, ordered, etc. If you are reporting exclamatory sentences , you can use the verb exclaimed and remove the exclamation mark . Remember that the structure of the sentences also changes accordingly.
  • Furthermore, keep in mind that the sentence structure , tense , pronouns , modal verbs , some specific adverbs of place and adverbs of time change when a sentence is transformed into indirect/reported speech.

Transforming Direct Speech into Reported Speech

As discussed earlier, when transforming a sentence from direct speech into reported speech, you will have to change the pronouns, tense and adverbs of time and place used by the speaker. Let us look at the following tables to see how they work.
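Because these changes behave like simple lookup rules, here is a minimal illustrative sketch in Python of how the pronoun and time/place substitutions could be encoded. The dictionary names and the shift_words helper are hypothetical, and the word-by-word replacement is deliberately naive; it is not a full grammar engine.

```python
# Illustrative sketch only: the pronoun and adverb changes described in this
# article, encoded as simple Python lookup tables.

PRONOUN_CHANGES = {      # first person -> third person
    "I": "he", "me": "him", "my": "his", "mine": "his",
    "we": "they", "us": "them", "our": "their", "ours": "theirs",
}

TIME_PLACE_CHANGES = {   # adverbs of time and place
    "now": "then", "today": "that day", "tomorrow": "the next day",
    "yesterday": "the day before", "here": "there", "this": "that",
}

def shift_words(words):
    """Replace each word using the lookup tables where a mapping exists."""
    return [TIME_PLACE_CHANGES.get(w, PRONOUN_CHANGES.get(w, w)) for w in words]

direct = "I am meeting my brother tomorrow".split()
print(" ".join(shift_words(direct)))
# -> "he am meeting his brother the next day"
# (the verb still needs the separate tense change: am -> was)
```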

Here are some tips you can follow to become a pro in using reported speech.

  • Select a play, a drama or a short story with dialogues and try transforming the sentences in direct speech into reported speech.
  • Write about an incident or speak about a day in your life using reported speech.
  • Develop a story by following prompts or on your own using reported speech.

Given below are a few examples to show you how reported speech can be written. Check them out.

  • Santana said that she would be auditioning for the lead role in Funny Girl.
  • Blaine requested us to help him with the algebraic equations.
  • Karishma asked me if I knew where her car keys were.
  • The judges announced that the Warblers were the winners of the annual acapella competition.
  • Binsha assured us that she would reach Bangalore by 8 p.m.
  • Kumar said that he had gone to the doctor the previous day.
  • Lakshmi asked Teena if she would accompany her to the railway station.
  • Jibin told me that he would help me out after lunch.
  • The police ordered everyone to leave the bus stop immediately.
  • Rahul said that he was drawing a caricature.

Transform the following sentences into reported speech by making the necessary changes.

1. Rachel said, “I have an interview tomorrow.”

2. Mahesh said, “What is he doing?”

3. Sherly said, “My daughter is playing the lead role in the skit.”

4. Dinesh said, “It is a wonderful movie!”

5. Suresh said, “My son is getting married next month.”

6. Preetha said, “Can you please help me with the invitations?”

7. Anna said, “I look forward to meeting you.”

8. The teacher said, “Make sure you complete the homework before tomorrow.”

9. Sylvester said, “I am not going to cry anymore.”

10. Jade said, “My sister is moving to Los Angeles.”

Now, find out if you have answered all of them correctly.

1. Rachel said that she had an interview the next day.

2. Mahesh asked what he was doing.

3. Sherly said that her daughter was playing the lead role in the skit.

4. Dinesh exclaimed that it was a wonderful movie.

5. Suresh said that his son was getting married the following month.

6. Preetha asked if I could help her with the invitations.

7. Anna said that she looked forward to meeting me.

8. The teacher told us to make sure we completed the homework before the next day.

9. Sylvester said that he was not going to cry anymore.

10. Jade said that his sister was moving to Los Angeles.

What is reported speech?

Reported speech is the form in which you convey a message said by yourself or someone else, mostly in the past, without quoting the speaker’s exact words.

What is the definition of reported speech?

Reported speech, according to the Oxford Learner’s Dictionary, is defined as “a report of what somebody has said that does not use their exact words.” The Collins Dictionary defines reported speech as “speech which tells you what someone said, but does not use the person’s actual words.” According to the Cambridge Dictionary, reported speech is defined as “the act of reporting something that was said, but not using exactly the same words.” The Macmillan Dictionary defines reported speech as “the words that you use to report what someone else has said.”

What is the formula of reported speech?

You can use the following formula to construct a sentence in reported speech: subject + said + that + (report whatever the speaker said).

Give some examples of reported speech.

A few examples are given in the Examples of Reported Speech section above, for instance: Santana said that she would be auditioning for the lead role in Funny Girl, and Blaine requested us to help him with the algebraic equations.


What Is Reported Speech and How to Use It? With Examples

Reported speech and indirect speech are two terms that refer to the same concept: the act of expressing what someone else has said. Reported speech is different from direct speech because it does not use the speaker's exact words. Instead, a reporting verb introduces the reported speech, and the tense and pronouns are changed to reflect the shift in perspective. There are two main types of reported speech: statements and questions.

1. Reported statements: In reported statements, the reporting verb is usually "said." The tense in the reported speech shifts back (for example, from the present simple to the past simple), and any pronouns referring to the speaker or listener are changed to reflect the shift in perspective. For example, "I am going to the store" becomes "He said that he was going to the store."

2. Reported questions: In reported questions, the reporting verb is usually "asked." The tense shifts back in the same way, and the word order changes from a question to a statement. For example, "What time is it?" becomes "She asked what time it was."

It's important to note that the tense shift in reported speech depends on the context and the time of the reported speech. Here are a few more examples:

  • Direct speech: "I will call you later." Reported speech: He said that he would call me later.
  • Direct speech: "Did you finish your homework?" Reported speech: She asked if I had finished my homework.
  • Direct speech: "I love pizza." Reported speech: They said that they loved pizza.

When do we use reported speech?

Reported speech is used to report what someone else has said, thought, or written. It is often used in situations where you want to relate what someone else has said without quoting them directly. Reported speech can be used in a variety of contexts, such as in news reports, academic writing, and everyday conversation. Some common situations where reported speech is used include:

  • News reports: Journalists often use reported speech to quote what someone said in an interview or press conference.
  • Business and professional communication: In professional settings, reported speech can be used to summarize what was discussed in a meeting or to report feedback from a customer.
  • Conversational English: In everyday conversations, reported speech is used to relate what someone else said. For example, "She told me that she was running late."
  • Narration: In written narratives or storytelling, reported speech can be used to convey what a character said or thought.

How to make reported speech?

1. Change the pronouns and adverbs of time and place: In reported speech, you need to change the pronouns and the adverbs of time and place to reflect the new speaker or point of view. Here's an example:

Direct speech: "I'm going to the store now," she said.
Reported speech: She said she was going to the store then.

In this example, the pronoun "I" is changed to "she" and the adverb "now" is changed to "then."

2. Change the tense: In reported speech, you usually need to change the tense of the verb to reflect the change from direct to indirect speech. Here's an example:

Direct speech: "I will meet you at the park tomorrow," he said.
Reported speech: He said he would meet me at the park the next day.

In this example, "will" is changed to "would."

3. Choose an appropriate reporting verb: In reported speech, you can use different reporting verbs such as "say," "tell," "ask," or "inquire" depending on the context of the speech. Here's an example:

Direct speech: "Did you finish your homework?" she asked.
Reported speech: She asked if I had finished my homework.

In this example, the reporting verb "asked" introduces an indirect question with "if," and "did ... finish" is changed to "had finished."

Overall, when making reported speech, it's important to pay attention to the verb tense and the changes in pronouns, adverbs, and reporting verbs to convey the original speaker's message accurately.

How do I change the pronouns and adverbs in reported speech?

1. Changing pronouns: In reported speech, the pronouns in the original statement must be changed to reflect the perspective of the new speaker. Generally, the first-person pronouns (I, me, my, mine, we, us, our, ours) are changed according to the subject of the reporting verb, while the second- and third-person pronouns (you, your, yours, he, him, his, she, her, hers, it, its, they, them, their, theirs) are changed according to the object of the reporting verb. For example:

Direct speech: "I love chocolate."
Reported speech: She said she loved chocolate.

Direct speech: "You should study harder."
Reported speech: He advised me to study harder.

Direct speech: "She is reading a book."
Reported speech: They noticed that she was reading a book.

2. Changing adverbs: In reported speech, the adverbs and adverbial phrases that indicate time or place may need to be changed to reflect the perspective of the new speaker. For example:

Direct speech: "I'm going to the cinema tonight."
Reported speech: She said she was going to the cinema that night.

Direct speech: "He is here."
Reported speech: She said he was there.

Note that the adverb "now" usually changes to "then" or is omitted altogether in reported speech, depending on the context. It's important to keep in mind that the changes made to pronouns and adverbs in reported speech depend on the context and the perspective of the new speaker. With practice, you can become more comfortable with making these changes in reported speech.

How do I change the tense in reported speech?

In reported speech, the tense of the reported verb usually changes to reflect the change from direct to indirect speech. Here are some guidelines on how to change the tense in reported speech:

  • Present simple changes to past simple. Direct speech: "I like pizza." Reported speech: She said she liked pizza.
  • Present continuous changes to past continuous. Direct speech: "I am studying for my exam." Reported speech: He said he was studying for his exam.
  • Present perfect changes to past perfect. Direct speech: "I have finished my work." Reported speech: She said she had finished her work.
  • Past simple changes to past perfect. Direct speech: "I visited my grandparents last weekend." Reported speech: She said she had visited her grandparents the previous weekend.
  • Will changes to would. Direct speech: "I will help you with your project." Reported speech: He said he would help me with my project.
  • Can changes to could. Direct speech: "I can speak French." Reported speech: She said she could speak French.

Remember that the tense changes in reported speech depend on the tense of the verb in the direct speech, and the tense you use in reported speech should match the time frame of the new speaker's perspective. With practice, you can become more comfortable with changing the tense in reported speech.
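To make the backshift pattern above concrete, here is a small illustrative Python sketch. The BACKSHIFT table and backshift_verbs function are assumed names for demonstration, and the word-level matching is deliberately simple, since real tense shifting depends on the whole verb phrase.

```python
# Illustrative sketch: a naive word-level "backshift" of common auxiliaries
# and modals, following the guidelines above. A teaching aid, not a parser.

BACKSHIFT = {
    "am": "was", "is": "was", "are": "were",
    "have": "had", "has": "had",
    "do": "did", "does": "did",
    "will": "would", "can": "could", "may": "might", "shall": "should",
}

def backshift_verbs(sentence: str) -> str:
    """Backshift any auxiliary or modal in the sentence that appears in BACKSHIFT."""
    return " ".join(BACKSHIFT.get(word, word) for word in sentence.split())

print(backshift_verbs("she is studying and she will help me"))
# -> "she was studying and she would help me"
```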

Do I always need to use a reporting verb in reported speech?

No, you do not always need to use a reporting verb in reported speech. However, using a reporting verb can help to clarify who is speaking and add more context to the reported speech. In some cases, the reported speech can be introduced by phrases such as "I heard that" or "It seems that" without using a reporting verb. For example:

Direct speech: "I'm going to the cinema tonight."
Reported speech with a reporting verb: She said she was going to the cinema tonight.
Reported speech without a reporting verb: It seems that she's going to the cinema tonight.

However, it's important to note that using a reporting verb can help to make the reported speech more formal and accurate. When using reported speech in academic writing or journalism, it's generally recommended to use a reporting verb to make the reporting more clear and credible. Some common reporting verbs include say, tell, explain, ask, suggest, and advise. For example:

Direct speech: "I think we should invest in renewable energy."
Reported speech with a reporting verb: She suggested that they invest in renewable energy.

Overall, while using a reporting verb is not always required, it can be helpful to make the reported speech more clear and accurate.

How to use reported speech to report questions and commands?

1. Reporting questions: When reporting questions, you need to use an introductory phrase such as "asked" or "wondered" followed by the question word (if applicable), subject, and verb. You also need to change the word order to make it a statement. Here's an example:

Direct speech: "What time is the meeting?"
Reported speech: She asked what time the meeting was.

Note that the question mark is not used in reported speech.

2. Reporting commands: When reporting commands, you need to use an introductory phrase such as "ordered" or "told" followed by the person, to + infinitive, and any additional information. Here's an example:

Direct speech: "Clean your room!"
Reported speech: She ordered me to clean my room.

Note that the exclamation mark is not used in reported speech.

In both cases, the tense of the reported verb should be changed accordingly. For example, present simple changes to past simple, and future changes to conditional. Here are some more examples:

Direct speech: "Will you go to the party with me?"
Reported speech: She asked if I would go to the party with her.

Direct speech: "Please bring me a glass of water."
Reported speech: She requested that I bring her a glass of water.

Remember that when using reported speech to report questions and commands, the introductory phrases and verb tenses are important to convey the intended meaning accurately.
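As a rough illustration of the command pattern above (reporting verb + person + (not) to + infinitive), here is a hedged Python sketch. The report_command function and its handling of "Don't" are assumptions for demonstration only; pronoun and tense changes are not handled.

```python
# Illustrative sketch: reporting a command as "<reporter> <verb> <person> (not) to <base form>".
# Real sentences also need pronoun and tense changes, which are omitted here.

def report_command(command: str, reporter: str = "She", person: str = "me",
                   verb: str = "told") -> str:
    body = command.strip().rstrip("!.").strip()
    negative = body.lower().startswith("don't ")
    if negative:
        body = body[len("don't "):]
    body = body[0].lower() + body[1:]          # lowercase the first word
    return f"{reporter} {verb} {person} " + ("not to " if negative else "to ") + body + "."

print(report_command("Wait outside!"))                               # She told me to wait outside.
print(report_command("Don't be late.", verb="asked", person="us"))   # She asked us not to be late.
```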

How to make questions in reported speech?

To make questions in reported speech, you need to use an introductory phrase such as "asked" or "wondered" followed by the question word (if applicable), subject, and verb. You also need to change the word order to make it a statement. Here are the steps to make questions in reported speech:

  • Identify the reporting verb: Common reporting verbs used to report questions include "asked," "inquired," "wondered," and "wanted to know."
  • Change the tense and pronouns: The tense of the verb is usually shifted back one tense (e.g. from present simple to past simple), and the pronouns are changed as necessary to reflect the shift in perspective from the original speaker to the reporting speaker.
  • Use an appropriate question word: If the original question contained a question word (e.g. who, what, where, when, why, how), use the same question word in the reported question. If the original question did not contain a question word, use "if" or "whether" to introduce the reported question.
  • Change the word order: The word order changes from the inverted question form to normal statement form, with the subject before the verb.

Here are some examples of reported questions:

Direct speech: "What time is the meeting?"
Reported speech: She asked what time the meeting was.

Direct speech: "Did you finish your homework?"
Reported speech: He wanted to know if I had finished my homework.

Direct speech: "Where are you going?"
Reported speech: She wondered where I was going.

Remember that when making questions in reported speech, the introductory phrases and verb tenses are important to convey the intended meaning accurately.
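The word-order change for yes/no questions can also be sketched in code. The report_yes_no_question function below is an illustrative assumption rather than a real parser: it only moves the fronted auxiliary after the subject and wraps the question with "asked if", leaving the pronoun and tense changes described earlier to be applied separately.

```python
# Illustrative sketch: turning a yes/no question into a reported question with "if".
# Naive auxiliary handling; pronoun and tense changes are intentionally left out.

def report_yes_no_question(question: str, reporter: str = "She") -> str:
    words = question.strip().rstrip("?").split()
    auxiliary, subject, rest = words[0].lower(), words[1], words[2:]
    return f"{reporter} asked if {subject} {auxiliary} {' '.join(rest)}."

print(report_yes_no_question("Are you going now?"))
# -> "She asked if you are going now."
# (a full version would also backshift "are" -> "were" and change "you"/"now")
```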

What is the difference between reported speech and indirect speech?

In reported or indirect speech, you are retelling or reporting what someone said using your own words. The tense of the reported speech is usually shifted back one tense from the tense used in the original statement. For example, if someone said, "I am going to the store," in reported speech you would say, "He/she said that he/she was going to the store." The main difference between reported speech and indirect speech is that reported speech usually refers to spoken language, while indirect speech can refer to both spoken and written language. Additionally, indirect speech is a broader term that includes reported speech as well as other ways of expressing what someone else has said, such as paraphrasing or summarizing.

Examples of direct speech to reported

1. Direct speech: "I am hungry," she said. Reported speech: She said she was hungry. 2. Direct speech: "Can you pass the salt, please?" he asked. Reported speech: He asked her to pass the salt. 3. Direct speech: "I will meet you at the cinema," he said. Reported speech: He said he would meet her at the cinema. 4. Direct speech: "I have been working on this project for hours," she said. Reported speech: She said she had been working on the project for hours. 5. Direct speech: "What time does the train leave?" he asked. Reported speech: He asked what time the train left. 6. Direct speech: "I love playing the piano," she said. Reported speech: She said she loved playing the piano. 7. Direct speech: "I am going to the grocery store," he said. Reported speech: He said he was going to the grocery store. 8. Direct speech: "Did you finish your homework?" the teacher asked. Reported speech: The teacher asked if he had finished his homework. 9. Direct speech: "I want to go to the beach," she said. Reported speech: She said she wanted to go to the beach. 10. Direct speech: "Do you need help with that?" he asked. Reported speech: He asked if she needed help with that. 11. Direct speech: "I can't come to the party," he said. Reported speech: He said he couldn't come to the party. 12. Direct speech: "Please don't leave me," she said. Reported speech: She begged him not to leave her. 13. Direct speech: "I have never been to London before," he said. Reported speech: He said he had never been to London before. 14. Direct speech: "Where did you put my phone?" she asked. Reported speech: She asked where she had put her phone. 15. Direct speech: "I'm sorry for being late," he said. Reported speech: He apologized for being late. 16. Direct speech: "I need some help with this math problem," she said. Reported speech: She said she needed some help with the math problem. 17. Direct speech: "I am going to study abroad next year," he said. Reported speech: He said he was going to study abroad the following year. 18. Direct speech: "Can you give me a ride to the airport?" she asked. Reported speech: She asked him to give her a ride to the airport. 19. Direct speech: "I don't know how to fix this," he said. Reported speech: He said he didn't know how to fix it. 20. Direct speech: "I hate it when it rains," she said. Reported speech: She said she hated it when it rained.

What is Direct and Indirect Speech?

Direct and indirect speech are two different ways of reporting spoken or written language.


Direct To Indirect Speech: Complete Rules With Examples


Direct and indirect speech is often a confusing topic for English learners. The basic idea is this:

  • In direct speech, we quote a person’s exact words. For example, Meera said, “I can speak English fluently.”
  • In indirect speech, we do not quote the person’s exact words but provide a summary of what was said. For example, Meera said that she could speak English fluently.

The critical difference is that direct speech uses the exact words spoken by a person, while indirect speech summarizes what was said. While the definition is simple, the challenge for English language learners is using the proper tenses when converting a phrase from direct to indirect and vice versa.

Why Should You Learn Direct To Indirect Speech Rules?

There are several occasions – in your professional and personal life – where you might need to describe an action or event to others. For example, you might have to repeat the team leader’s instructions to your teammates at the workplace. In this scenario, you convert your team leader’s direct speech into indirect speech.

Knowing conversion rules can help you present or describe the event correctly without making any grammatical errors or spoken English blunders.

In this post, we walk you through the rules of converting direct to indirect speech, helping you speak English fluently online and offline.

How To Use Direct Speech?

The rule is simple: Use direct speech when you want to repeat what someone says as it is, and ensure that the spoken text is sandwiched between quotation (speech) marks.

John said, “I want to learn to speak English fluently.”

It’s common to see the direct speech in newspaper articles and books. For example,

The District Collector announced, “The Chief Minister will inaugurate the city centre next week.”

As you can notice, in direct speech, we use the verb say (said in the past tense) to denote what was spoken. You can also use related verbs like ‘asked,’ ‘replied,’ ‘told,’ ‘informed,’ ‘shouted,’ etc.

How To Use Indirect Speech?

Indirect speech is also called reported speech, as we use it to inform others of or repeat what someone else said. Using the two examples above, we can convert them into indirect speech as follows:

John said that he wanted to learn to speak English fluently.

The District Collector announced that the Chief Minister would inaugurate the city centre the week after.

Another example,

Direct Speech: “I feel cold.”

Indirect Speech: She says that she feels cold.

If you notice these examples carefully, you can see that the tense changes when converting from direct to indirect speech. To illustrate this point, in the following example, direct speech is in the present simple tense, while indirect speech is written in the simple past tense.

Direct Speech: “I live in the city centre.”

Indirect Speech: He said he lived in the city centre.

Tense Change Rules: Direct To Indirect Speech

Other tenses follow similar rules when changing from direct to indirect speech. The most common changes are: present simple → past simple, present continuous → past continuous, present perfect → past perfect, past simple → past perfect, will → would and can → could.

Modal Verbs: Direct To Indirect Speech

When converting direct to indirect speech, you must change the modal verbs accordingly: will changes to would, can to could, may to might, and must (when it expresses necessity) to had to. The modals could, would, should, might and ought to do not change.

Changing Time Expressions: Direct To Indirect Speech

Sometimes it becomes necessary to change the time expressions when converting from direct to indirect speech. A few examples,

  • Direct speech: Sheila said, “I am meeting my brother tomorrow.”
  • Indirect speech: Sheila said that she was meeting her brother the following day.

Here are a few examples of other typical time expressions and how they change: today → that day, tonight → that night, tomorrow → the following day, yesterday → the day before, now → then, next week → the following week, last week → the previous week.

Changing Place Expressions: Direct To Indirect Speech

Like time expressions, you might also have to change words representing places when reporting indirect speech. For example,

  • Direct speech: “It’s raining here.”
  • Indirect speech: She said that it was raining there.

Here are a few examples of other common place expressions and how they change: here → there, this → that, these → those.

However, the place words only change when you report something from a different location.

Over To You

Now that you’ve seen the rules to convert direct to indirect speech, it’s time to put them into practice. The most efficient way to improve English speaking is to practice what you’ve learned. Join online English-speaking practice classes to gain confidence and mastery in your daily conversations.



Indirect Speech: Formula and Rules


In this lesson, we are talking about a very important and interesting topic: direct and indirect speech in English, and the correct formulas for using them.


This topic can seem complicated at first, but it is necessary to learn. Once you have it figured out, your English will move up to a new level, so let’s start dealing with it.

What are Direct and Indirect speech?

In English, there are two ways we can tell what another person said:

  • Direct Speech
  • Indirect (Reported) Speech

Note: Different textbooks call indirect speech by different names: Indirect Speech or Reported Speech. These two names mean the same thing.

Indirect Speech = Reported Speech

The infographic shows that there is no difference between the terms indirect speech and reported speech.

Direct speech in English is a type of speech in which we repeat someone’s words exactly as they were said. We don’t change anything.

John says: I’m a good boy.

To tell what John said, we will say:

We say: John said, “I’m a good boy.”

Indirect speech differs from direct speech in that we DO NOT tell exactly what another person said. We are NOT repeating what someone else said. Indirect speech is when we tell the MEANING of what someone else said.

We say: John said he was a good boy.

Pay attention to what this sentence looks like. Earlier, when John said this, the sentence looked like this:

I am a good boy.

But after WE retell John’s words, in the indirect speech, this sentence looks like this:

John said he was a good boy.

The quotation marks and the comma that stood after the name John, separating the speaker from his direct speech, have disappeared from this sentence.

In indirect speech, we do not use the separating comma or quotation marks, because now WE are retelling the meaning of what the other person (John) said.

The rule that we don't use the comma and quotation marks in indirect speech

In direct speech, the speaker most often speaks in the first person. That is, the speaker speaks from his person.

John will not talk about himself in the third person (John is a good boy). John will speak on his own behalf: I am a good boy.

But when we retell the words of John (indirect speech), we cannot speak on his behalf. We cannot say “I am a good boy” because those are not our words; it is John who is a good boy.

Therefore, in indirect speech, we change “I” to the third person.

He says: I hate you but I need your help.
I retell: He said that he hated me but he needed my help.

To translate direct speech into indirect speech, we use certain rules that you should know.

Let’s take a look at these rules and formulas in order.

Quotation marks and comma

In direct speech, we use a comma to separate the speaker from what he is saying. Direct speech (what the speaker says) is in quotation marks.

When we translate direct speech into indirect speech, we remove quotes and commas.

Jessica says, “I’m from the future.”
We retell Jessica’s words: She said that she was from the future.

Personal and possessive pronouns

When translating direct speech into indirect speech, we change personal and possessive pronouns to third-person pronouns.

Direct Speech: He says, “I couldn’t stay.”
Indirect Speech: He said that he couldn’t stay.
Direct Speech: Tom says, “I am deeply disturbed.”
Indirect Speech: Tom said that he was deeply disturbed.

Note: If the speaker is reporting his or her own words, then we do not change the personal and possessive pronouns.

Direct Speech: I said, “I will do that.”
Indirect Speech: I said that I would do that.

Adverbs in direct speech

When we translate adverbs from direct speech to indirect, adverbs change their form.

Here is how the most common adverbs change from direct to indirect speech: now → then, today → that day, tomorrow → the next day, yesterday → the day before, here → there, this → that.

But we don’t always change adverbs this way. We change adverbs only if, when translating from direct speech into indirect speech adverbs cannot express the same meaning as in direct speech.

Take a look at an example:

Mom says, “Tomorrow we will go to Uncle John’s.” Mom said that the next day we would go to Uncle John’s.

In these examples, we have replaced the adverb tomorrow with the next day . Because we retell Mom’s words on another day. We cannot say tomorrow anymore.

Now look at another example:

Mom says, “We went to visit Uncle John yesterday.”

Now imagine that we are retelling this the next day. We have to say:

Mom said that we went to visit Uncle John the day before yesterday.

If we said “yesterday”, it would change the meaning of what we want to say.

If the predicate of the main clause (the reporting verb) is in the Past Simple, then in indirect speech we apply the sequence-of-tenses (agreement) rules.

We put the conjunction “that” in front of the indirect speech.

Note: We may omit the conjunction that after verbs such as to say, to think and to know:

He said he found it on the island. He thought he was better than me. He knew he could call you anytime.

Prepositional object

If in direct speech after the verb to say there is a prepositional object, then to translate such a sentence into indirect speech we change the verb to say to tell. In this case, tell is used without the preposition to.

Incorrect: to tell
Correct: tell

This means:

She said to me … changes to She told me that …

Note: Remember that in this case we also change the adverbs of place and time and the demonstrative pronouns, if they appear in the direct speech.

Modal verbs

For modals, we use several important rules.

We change modal verbs as well as main verbs when moving from direct to indirect speech.

But we do not change all modal verbs. We leave some verbs in their original form.

Let’s talk about modals in more detail.

Modal verb must

If in direct speech the verb must means an obligation or command, then in the subordinate clause in indirect speech must does NOT change and looks like must .

The teacher says, “You must behave well in class.” The teacher said that we must behave well in class.

If in direct speech the verb must expresses necessity, then in the subordinate clause in indirect speech we change the verb must to had to.

Mom says, “You must visit the doctor.” Mom said that I had to visit the doctor.

The past form of Modal verbs in indirect speech

Can and could..

We change the modal verb can in direct speech to could in indirect speech. Could is the past form of the modal verb can .

She says, “I can swim.” She said that she could swim.

May and might.

We change the modal verb may in direct speech to might in indirect speech. Might is the past form of the modal verb may .

John says, “I may propose to Maria.” John said that he might propose to Maria.

Must and had to.

We change the modal verb must in direct speech to had to in indirect speech (if the verb must expresses necessity). Had to is the past-tense equivalent of the modal verb must.


Modal verbs that do not change in indirect speech

The following verbs move from direct to indirect speech in their original form. They don’t change in any way.

  • would
  • could
  • might
  • should
  • ought to
  • must (if the verb must means an obligation or command)

He says, “I could do this.” He said he could do that.

Let’s take a closer look at these verbs:

The modal verb would in direct speech remains in the form would in indirect speech too.

Mom says, “I would bake a cake.” Mom said she would bake a cake.

If we use the modal verb could in direct speech, then we do not change this verb in any way in indirect speech. Because could is a past form already (It’s the past form of the modal verb can ).

John says, “I could learn to swim” John said he could learn to swim.

The modal verb might does not change its form when we translate this verb from direct to indirect speech. Because the modal might is the past form of the modal may .

He says, “I might ask the same question again”. He said that he might ask the same question again.

We do not change should when switching to indirect speech. Because should is considered the past form of the modal verb shall .

He says, “We should see Mr. Gannon” He said that we should see Mr. Gannon.

We do not change the modal verb OUGHT TO when translating this verb into indirect speech.

She says, “You ought to be angry with John” She said that I ought to be angry with John

Exceptions to the rules

Let’s talk about the important exceptions to the rules of this lesson.

  • We can leave out the word that in affirmative sentences in indirect speech, because the meaning of the sentence does not change whether we use that or not.
He said ( that ) he thought you seemed depressed. He said ( that ) there was no need. He said ( that ) he had many friends.
  • If in direct speech we are talking about a specific event that happened at exactly the specified time and did not happen again, then we translate the sentence into indirect speech without applying the sequence of tenses.
He says, “Gagarin went to space in 1961.” He said that Gagarin went to space in 1961.

The event that we are talking about in this example happened at exactly the specified time and did not happen anymore.


  • If in direct speech we use verbs such as might, should, could, would or ought to, then when translating into indirect speech we do not change the form of these verbs. These verbs remain in their original form.

She says, “We might find some treasure” She said that we might find some treasure.
He says, “I should do it”. He said that he should do it.
  • If the reporting verb say or tell is used in one of the following forms:
  • Present Simple
  • Present Perfect
  • Future Simple

then we translate such a sentence into indirect speech without changing the tense to the past:

She says, “I cook deliciously.” She says that she cooks deliciously.
He says, “I have a new smartphone.” He says that he has a new smartphone.
She will say, “I didn’t know it.” She will say (that) she didn’t know it.
  • If in direct speech we are talking about a well-known fact or law of nature, then we do not transfer to the past such a fact or the law of nature when translating from direct speech to indirect.
He says, “After winter comes spring.” He said that after winter comes spring. She says, “Lions don’t hunt camels.” She said that lions don’t hunt camels.
  • If in direct speech we use tenses:
  • Past Continuous
  • Past Perfect
  • Past Perfect Continuous

then when translating into indirect speech, we do not change the sentence, we do not translate the sentence into the past.

He says, “I had fixed my car.” He said he had fixed his car.
He says, “I was skiing.” He said he was skiing.
He says, “I had been all alone for a very long time.” He said that he had been all alone for a very long time.

Interrogative (question) sentences in indirect speech

Look at the following rules and nuances to know how to correctly translate interrogative (question) sentences from direct speech to indirect speech:

  • When we translate a general question into indirect speech, we put one of the conjunctions if or whether between the main clause and the question:
He asks, “Do you play dominoes?” He asked if I played dominoes. He asked whether I played dominoes.

The use of conjunctions if and whether

  • If we translate an interrogative sentence from direct speech to indirect speech, then we change the interrogative word order to direct word order.

We remove the auxiliary verb that was used in the interrogative sentence. We put the subject before the predicate as it should be for the direct word order.

He asks, “Where are you going?” He asked where I was going.
  • If we report a question using the verb say and there is no indirect object in the main sentence, then we change the verb say to one of these verbs:
  • ask
  • wonder
  • want to know
She asks, “Where are you?” She wanted to know where you were.
  • When translating an interrogative sentence from direct speech into indirect speech, we change all pronouns, verbs, adverbs of place, adverbs of time.
She asks, “What do these letters mean?” She asked what those letters meant.

Special questions in indirect speech

Special questions (or Wh-questions) are questions that begin with a question word.

In indirect speech, such a question should also begin with a question word.

This question word also serves as conjunction. This word attaches the question part to the main sentence.

In the question part, we use direct word order.

At the same time, we comply with all the rules for the Sequence of tenses.

My dad asks, “What do you plan to do with yourself?” My dad asked what I planned to do with myself.

Imperative sentences in indirect speech

When translating imperative sentences from direct to indirect speech, we must take into account several nuances:

  • Orders in indirect speech look like this:
He said, “Go now!” He said to go then. She says, “Carry my bag.” She asked to carry her bag.

We use the verb to say when we translate an ordinary sentence into indirect speech. But in imperative sentences, we change the verb to say to a verb that expresses an order or request:

She says, “Carry my bag.” She asked to carry her bag.

The infographic shows how we use imperative sentences in indirect speech

  • In direct speech in the imperative mood, we often use:

let’s (let us)

With let’s, we encourage the speaker and the listener to do something together.

In indirect speech, we change let’s to the verb to suggest. For example:

She says, “Let’s do that!” She suggested doing that.
  • In indirect speech, we put a noun after the verb that expresses an order or request. The noun is the one to whom this request or order is addressed. Then we use the infinitive.
She says, “Replace him, John.” She asked John to replace him.
  • We can strengthen the request or order in indirect speech if we add verbs such as:
  • to order
  • to advise
  • to recommend
  • to urge, etc.
She says, “Read this book.” She ordered (advised, recommended) me to read that book.
  • To report a negative imperative sentence in indirect speech, we need:

not + infinitive

He says, “Don’t cry.” He told me not to cry.
  • In direct speech, we often do not name the person to whom the order or request is addressed. But when translating an imperative sentence from direct speech to indirect speech, we must indicate the one to whom the order or request is addressed.

For this, we use a noun or a pronoun.

She says, “Speak to him!” She asked me to speak to him.

Present and future tense in indirect speech

Most often, we translate the future and the present into the past.

He says, “I have two brothers.” He said that he had two brothers.
She says, “I do this every time.” She said that she did that every time.
He says, “I write books.” He said that he wrote books.
She says, “I am reading.” She said that she was reading.
He says, “I can swim.” He said that he could swim.
He says, “I will help you.” He said that he would help me.

Past tense in indirect speech

When we translate a sentence written in the past into indirect speech, we can leave it unchanged or we can change the past to the Past Perfect.

He says, “I saw this movie.” He said that he saw that movie. He said that he had seen that movie.

What if in direct speech the main verb is already in Past Perfect?

In this case, the verb in Past Perfect remains unchanged. The verb in Past Perfect in direct speech remains in Past Perfect in indirect speech too.

He says, “I had bought a new house.” He said that he had bought a new house.





Direct and Indirect speech: rules and examples


In English, to report someone’s words or their own words, you can use direct or indirect speech. These may include statements, questions, orders, advice…

When moving from direct to indirect style, it is often necessary to change personal pronouns, demonstrative and possessive pronouns according to who says what:

  • I  → he / she
  • me →  him / her
  • my →  his / her
  • this →  that
  • mine →  his / hers
  • ours →  theirs
  • our →  their


Note: That is often implied in indirect speech. It is not mandatory to use it, so it is indicated in brackets in this lesson.

Introductory verbs

To relate someone’s words to both direct and indirect speech, you need an introductory verb.

The two most frequent are tell and say, but there are many other possible ones, like:

  • ask
  • want to know
  • wonder
  • advise
  • order
  • suggest

Say or tell?

Be careful to distinguish SAY from TELL. The two verbs have much the same meaning, but their use is different. With TELL, the person spoken to is mentioned: the name or pronoun is placed immediately after tell (tell somebody something).

With SAY, the person spoken to is not necessarily mentioned; if they are, they are introduced by the preposition to (say something to somebody):

  • He says (that) he is English. 
  • He tells me (that) he is English. 

However, tell is used in some expressions without mentioning a contact person:

  • tell the truth 
  • tell a story 
  • tell the time 

Note: the wording ‘He said to me…’ is possible but seems clumsy. It is best to use ‘He told me…’.

TENSE CHANGES

The shift to indirect speech leads to changes in the verb tense, depending on whether the introductory verb is in the present tense or in the past tense.

If the introductory verb is in the present tense, the tense (or modal) does not change. 

  • “I’m sorry.” → He says he is sorry. 
  • “I hate driving” → He says he hates driving.

Be careful, if the statements reported are still true now you must not change the tense!

  • He said this morning (that) he hates driving. (= He still hates driving now).

If the introductory verb is in the past, the verb tense changes:

Examples of the main tense changes: present simple → past simple, present continuous → past continuous, present perfect → past perfect, past simple → past perfect, will → would, can → could, may → might.

The modals could, should, would, might, needn’t, ought to, used to don’t change when used with indirect speech.

Those who change are will → would, can → could, may → might :

  • I will come with you. → Tina promised she would come with me. 
  • I can help you. → He said he could help me. 
  • It may be a good idea. → I thought it might be a good idea.


TIME, PLACE AND DEMONSTRATIVE MARKERS

Expressions of time, place and demonstratives change if the context of indirect speech is different from that of direct speech.

She said “I saw him yesterday.” → She said she had seen him the day before. 

Orders and prohibitions to indirect speech

To report an order or prohibition in indirect speech, verbs such as tell, order or forbid are used. Be careful: remember to replace Don’t with not when reporting a negative order!

For affirmative sentences, use to + infinitive

For negative sentences use not to + infinitive

  • Don’t worry! → He told her not to worry.
  • He said, “go to bed!” → He ordered the child to go to bed.
  • Don’t marry him! → She forbade me to marry him.
  • Please don’t be late. → She asked us not to be late.

Questions in indirect speech

If there is an interrogative word like where/who/when/why… in direct speech, we keep it in indirect speech:

  • What are you doing? → She asked me what I was doing. 
  • Who was that beautiful woman? → He asked me who that beautiful woman had been.
  • Where do you live? → He wanted to know  where I lived.
  • “Why don’t you speak Spanish?” → He asked me why I didn’t speak Spanish.

If it is a closed-ended question or you have to answer yes/no, you use if or whether :

  • “Do you like chocolate?” → She asked me if I liked chocolate.
  • “Are you living here?” → She asked me if I was living here.
  • “Have you ever been to Paris?” → He asked me if I had ever been to Paris.

When the question contains a modal, it shifts to its past form in the reported question:

  • How will he react? → He wondered how he would react.

Some examples of indirect questions:

  • I wondered what they were talking about.
  • I don’t know if they’ll come or not.

OTHER CHANGES

Expressions of advice such as must, should and ought are usually reported using the verbs advise or urge :

  • “You must read this book.” → He advised / urged me to read that book.

The expression let’s is usually reported using the verb suggest, with gerund or with should:

  • “Let’s go to the cinema.” → He suggested going to the cinema. OR He suggested that we should go to the cinema.



  • Open access
  • Published: 26 April 2024

Online speech synthesis using a chronically implanted brain–computer interface in an individual with ALS

Miguel Angrick, Shiyu Luo, Qinwan Rabbani, Daniel N. Candrea, Samyak Shah, Griffin W. Milsap, William S. Anderson, Chad R. Gordon, Kathryn R. Rosenblatt, Lora Clawson, Donna C. Tippett, Nicholas Maragakis, Francesco V. Tenore, Matthew S. Fifer, Hynek Hermansky, Nick F. Ramsey & Nathan E. Crone

Scientific Reports volume 14, Article number: 9617 (2024)


  • Amyotrophic lateral sclerosis
  • Neuroscience

Brain–computer interfaces (BCIs) that reconstruct and synthesize speech using brain activity recorded with intracranial electrodes may pave the way toward novel communication interfaces for people who have lost their ability to speak, or who are at high risk of losing this ability, due to neurological disorders. Here, we report online synthesis of intelligible words using a chronically implanted brain-computer interface (BCI) in a man with impaired articulation due to ALS, participating in a clinical trial (ClinicalTrials.gov, NCT03567213) exploring different strategies for BCI communication. The 3-stage approach reported here relies on recurrent neural networks to identify, decode and synthesize speech from electrocorticographic (ECoG) signals acquired across motor, premotor and somatosensory cortices. We demonstrate a reliable BCI that synthesizes commands freely chosen and spoken by the participant from a vocabulary of 6 keywords previously used for decoding commands to control a communication board. Evaluation of the intelligibility of the synthesized speech indicates that 80% of the words can be correctly recognized by human listeners. Our results show that a speech-impaired individual with ALS can use a chronically implanted BCI to reliably produce synthesized words while preserving the participant’s voice profile, and provide further evidence for the stability of ECoG for speech-based BCIs.


Introduction

A variety of neurological disorders, including amyotrophic lateral sclerosis (ALS), can severely affect speech production and other purposeful movements while sparing cognition. This can result in varying degrees of communication impairments, including Locked-In Syndrome (LIS) 1 , 2 , in which patients can only answer yes/no questions or select from sequentially presented options using eyeblinks, eye movements, or other residual movements. Individuals such as these may use augmentative and alternative technologies (AAT) to select among options on a communication board, but this communication can be slow, effortful, and may require caregiver intervention. Recent advances in implantable brain-computer interfaces (BCIs) have demonstrated the feasibility of establishing and maintaining communication using a variety of direct brain control strategies that bypass weak muscles, for example to control a switch scanner 3 , 4 , a computer cursor 5 , to write letters 6 or to spell words using a hybrid approach of eye-tracking and attempted movement detection 7 . However, these communication modalities are still slower, more effortful, and less intuitive than speech-based BCI control 8 .

Recent studies have also explored the feasibility of decoding attempted speech from brain activity, outputting text or even acoustic speech, which could potentially carry more linguistic information such as intonation and prosody. Previous studies have reconstructed acoustic speech in offline analysis from linear regression models 9 , convolutional 10 and recurrent neural networks 11 , 12 , and encoder-decoder architectures 13 . Concatenative approaches from the text-to-speech synthesis domain have also been explored 14 , 15 , and voice activity has been identified in electrocorticographic (ECoG) 16 and stereotactic EEG recordings 17 . Moreover, speech decoding has been performed at the level of American English phonemes 18 , spoken vowels 19 , 20 , spoken words 21 and articulatory gestures 22 , 23 .

Until now, brain-to-speech decoding has primarily been reported in individuals with unimpaired speech, such as patients temporarily implanted with intracranial electrodes for epilepsy surgery. To date, it is unclear to what extent these findings will ultimately translate to individuals with motor speech impairments, as in ALS and other neurological disorders. Recent studies have demonstrated how neural activity acquired from an ECoG grid 24 or from microelectrodes 25 can be used to recover text from a patient with anarthria due to a brainstem stroke, or from a patient with dysarthria due to ALS, respectively. Prior to these studies, a landmark study allowed a locked-in volunteer to control a real-time synthesizer generating vowel sounds 26 . More recently, Metzger et al. 27 demonstrated in a clinical trial participant diagnosed with quadriplegia and anarthria a multimodal speech-neuroprosthetic system that was capable of synthesizing sentences in a cued setting from silent speech attempts. In our prior work, we presented a ‘plug-and-play’ system that allowed a clinical trial participant living with ALS to issue commands to external devices, such as a communication board, by using speech as a control mechanism 28 .

In related work, BCIs based on non-invasive modalities, such as electroencephalography (EEG), functional near-infrared spectroscopy (fNIRS) or functional magnetic resonance imaging (fMRI), have been investigated for speech decoding applications. These studies have largely focused on imagined speech 29 to avoid contamination by movement artifacts 30 . Recent work by Dash et al., for example, reported speech decoding results for imagined and spoken phrases from 3 ALS patients using magnetoencephalography (MEG) 31 . While speech decoding based on non-invasive methodologies is an important branch of the BCI field, since these methods do not require surgery and may be adopted more easily by a larger population, their current state of the art comes with disadvantages compared to implantable BCIs: they lack either temporal or spatial resolution, or are not yet feasible for use at home.

Here, we show that an individual living with ALS and participating in a clinical trial of an implantable BCI (ClinicalTrials.gov, NCT03567213) was able to produce audible, intelligible words that closely resembled his own voice, spoken at his own pace. Speech synthesis was accomplished through online decoding of ECoG signals generated during overt speech production from cortical regions previously shown to represent articulation and phonation, following similar previous work 11 , 19 , 32 , 33 . Our participant had considerable impairments in articulation and phonation. He was still able to produce some words that were intelligible when spoken in isolation, but his sentences were often unintelligible. Here, we focused on a closed vocabulary of 6 keywords, originally used for decoding spoken commands to control a communication board. Our participant was capable of producing these 6 keywords individually with a high degree of intelligibility. We acquired training data over a period of 6 weeks and deployed the speech synthesis BCI in several separate closed-loop sessions. Since the participant could still produce speech, we were able to easily and reliably time-align the individual’s neural and acoustic signals to enable a mapping between his cortical activity during overt speech production processes and his voice’s acoustic features. We chose to provide delayed rather than simultaneous auditory feedback in anticipation of ongoing deterioration in the patient’s speech due to ALS, with increasing discordance and interference between actual and BCI-synthesized speech. This design choice would be ideal for a neuroprosthetic device that remains capable of producing intelligible words as an individual’s speech becomes increasingly unintelligible, as was expected in our participant due to ALS.

Here, we present a self-paced BCI that translates brain activity directly to acoustic speech that resembles characteristics of the user’s voice profile, with most synthesized words of sufficient intelligibility to be correctly recognized by human listeners. This work makes an important step in adding more evidence that recent speech synthesis from neural signals in patients with intact speech can be translated to individuals with neurological speech impairments, by first focusing on a closed vocabulary that the participant can reliably generate at his own pace, before generalizing towards unseen words. Synthesizing speech from the neural activity associated with overt speech allowed us to demonstrate the feasibility of reproducing the acoustic features of speech when ground truth is available and its alignment with an acoustic target is straightforward, in turn setting a standard for future efforts when ground truth is unavailable, as in the Locked In Syndrome. Moreover, because our speech synthesis model was trained on data that preceded testing by several months, our results also support the stability of ECoG as a basis for speech BCIs.

In order to synthesize acoustic speech from neural signals, we designed a pipeline that consisted of three recurrent neural networks (RNNs) to (1) identify and buffer speech-related neural activity, (2) transform sequences of speech-related neural activity into an intermediate acoustic representation, and (3) eventually recover the acoustic waveform using a vocoder. Figure  1 shows a schematic overview of our approach. We acquired ECoG signals from two electrode grids that covered cortical representations for speech production including ventral sensorimotor cortex and the dorsal laryngeal area (Fig.  1 A). Here, we focused only on a subset of electrodes that had previously been identified as showing significant changes in high-gamma activity associated with overt speech production (see Supplementary Fig.  2 ). From the raw ECoG signals, our closed-loop speech synthesizer extracted broadband high-gamma power features (70–170 Hz) that had previously been demonstrated to encode speech-related information useful for decoding speech (Fig.  1 B) 10 , 14 .

figure 1

Overview of the closed-loop speech synthesizer. ( A ) Neural activity is acquired from a subset of 64 electrodes (highlighted in orange) from two 8 × 8 ECoG electrode arrays covering sensorimotor areas for face and tongue, and for upper limb regions. ( B ) The closed-loop speech synthesizer extracts high-gamma features to reveal speech-related neural correlates of attempted speech production and propagates each frame to a neural voice activity detection (nVAD) model ( C ) that identifies and extracts speech segments ( D ). When the participant finishes speaking a word, the nVAD model forwards the high-gamma activity of the whole extracted sequence to a bidirectional decoding model ( E ) which estimates acoustic features ( F ) that can be transformed into an acoustic speech signal. ( G ) The synthesized speech is played back as acoustic feedback.

We used a unidirectional RNN to identify and buffer sequences of high-gamma activity frames and extract speech segments (Fig.  1 C,D). This neural voice activity detection (nVAD) model internally employed a strategy to correct misclassified frames based on each frame's temporal context, and additionally included a context window of 0.5 s to allow for smoother transitions between speech and non-speech frames. Each buffered sequence was forwarded to a bidirectional decoding model that mapped high-gamma features onto 18 Bark-scale cepstral coefficients 34 and 2 pitch parameters, henceforth referred to as LPC coefficients 35 , 36 (Fig.  1 E,F). We used a bidirectional architecture to include past and future information while making frame-wise predictions. Estimated LPC coefficients were transformed into an acoustic speech signal using the LPCNet vocoder 36 and played back as delayed auditory feedback (Fig.  1 G).
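
To make the data flow concrete, the following is a minimal sketch of the online gating-and-decoding loop described above. The nvad, decoder, vocoder and play objects are hypothetical placeholders standing in for the three trained networks and the audio output; this is not the authors' implementation, only an illustration of how frames are gated, buffered with 0.5 s of preceding context, and decoded once a word ends.

```python
# Minimal sketch of the online flow: gate frames with the nVAD, buffer detected
# speech (plus 0.5 s of preceding context), then decode and vocode the whole
# segment once the word ends. All model objects are hypothetical placeholders.
from collections import deque

import numpy as np

CONTEXT_FRAMES = 50            # 0.5 s of 10 ms high-gamma frames

def run_closed_loop(frame_stream, nvad, decoder, vocoder, play):
    """frame_stream yields (n_channels,) high-gamma vectors every 10 ms;
    nvad.predict, decoder.decode, vocoder.synthesize and play are stand-ins."""
    context = deque(maxlen=CONTEXT_FRAMES)   # trailing non-speech context
    buffer, in_speech = [], False
    for frame in frame_stream:
        if nvad.predict(frame):               # frame-wise speech detection
            if not in_speech:
                buffer, in_speech = list(context), True   # prepend context
            buffer.append(frame)
        elif in_speech:
            hg = np.stack(buffer)                    # (T, n_channels) segment
            lpc_feats = decoder.decode(hg)           # (T, 20): 18 cepstral + 2 pitch
            play(vocoder.synthesize(lpc_feats))      # delayed auditory feedback
            buffer, in_speech = [], False
        context.append(frame)
```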

Synthesis performance

When deployed in sessions with the participant for online decoding, our speech-synthesis BCI was reliably capable of producing acoustic speech that captured many details and characteristics of the voice and pacing of the participant’s natural speech, often yielding a close resemblance to the words spoken in isolation by the participant. Figure  2 A provides examples of original and synthesized waveforms for a representative selection of words time-aligned by subtracting the duration of the extracted speech segment from the nVAD model. Onset timings from the reconstructed waveforms indicate that the decoding model captured the flow of the spoken word while also synthesizing silence around utterances for smoother transitions. A comparison between voice activity for spoken and synthesized speech revealed a median Levenshtein distance of 235 ms, indicating that the synthesis approach was capable of generating speech that adequately matched the timing of the spoken counterpart. Figure  2 B shows the corresponding acoustic spectrograms for the spoken and synthesized words, respectively. The spectral structures of the original and synthesized speech shared many common characteristics and achieved average correlation scores of 0.67 (± 0.18 standard deviation), suggesting that phoneme- and formant-specific information were preserved.

figure 2

Evaluation of the synthesized words. ( A ) Visual example of time-aligned original and reconstructed acoustic speech waveforms and their spectral representations ( B ) for 6 words that were recorded during one of the closed-loop sessions. Speech spectrograms are shown between 100 and 8000 Hz with a logarithmic frequency range to emphasize formant frequencies. ( C ) The confusion matrix between human listeners and ground truth. ( D ) Distribution of accuracy scores from all listeners who performed the listening test on the synthesized speech samples. Dashed line shows chance performance (16.7%).

We conducted 3 sessions across 3 different days (approximately 5 and a half months after the training data were acquired; each session lasted 6 min) to repeat the experiment with acoustic feedback from the BCI to the participant (see Supplementary Video 1 for an excerpt). Other experiment parameters were not changed. All synthesized words were played back on loudspeakers and simultaneously recorded for evaluation.

To assess the intelligibility of the synthesized words, we conducted listening tests in which human listeners played back individual samples of the synthesized words and selected the word that most closely resembled each sample. Additionally, we mixed in samples that contained the originally spoken words. This allowed us to assess the quality of the participant’s natural speech. We recruited a cohort of 21 native English speakers to listen to all samples that were produced during our 3 closed-loop sessions. Out of 180 samples, we excluded 2 words because the nVAD model did not detect speech activity and therefore no speech output was produced by the decoding model. We also excluded a few cases where speech activity was falsely detected by the nVAD model, which resulted in synthesized silence and remained unnoticed by the participant.

Overall, human listeners achieved an accuracy score of 80%, indicating that the majority of synthesized words could be correctly and reliably recognized. Figure  2 C presents the confusion matrix regarding only the synthesized samples where the ground truth labels and human listener choices are displayed on the X- and Y-axes respectively. The confusion matrix shows that human listeners were able to recognize all but one word at very high rates. “Back” was recognized at low rates, albeit still above chance, and was most often mistaken for “Left”. This could have been due in part to the close proximity of the vowel formant frequencies for these two words. The participant’s weak tongue movements may have deemphasized the acoustic discriminability of these words, in turn resulting in the vocoder synthesizing a version of “back” that was often indistinct from “left”. In contrast, the confusion matrix also shows that human listeners were confident in distinguishing the words “Up” and “Left”. The decoder synthesized an intelligible but incorrect word in only 4% of the cases, and all listeners accurately recognized the incorrect word. Note that all keywords in the vocabulary were chosen for intuitive command and control of a computer interface, for example a communication board, and were not designed to be easily discriminable for BCI applications.

Figure  2 D summarizes individual accuracy scores from all human listeners from the listening test in a histogram. All listeners recognized between 75 and 84% of the synthesized words. All human listeners achieved accuracy scores above chance (16.7%). In contrast, when tested on the participant’s natural speech, our human listeners correctly recognized almost all samples of the 6 keywords (99.8%).

Anatomical and temporal contributions

In order to understand which cortical areas contributed to identification of speech segments, we conducted a saliency analysis 37 to reveal the underlying dynamics in high-gamma activity changes that explain the binary decisions made by our nVAD model. We utilized a method from the image processing domain 38 that queries spatial information indicating which pixels have contributed to a classification task. In our case, this method ranked individual high-gamma features over time by their influence on the predicted speech onsets (PSO). We defined the PSO as the first occurrence when the nVAD model identified spoken speech and neural data started to get buffered before being forwarded to the decoding model. The absolute values of their gradients allowed interpretations of which contributions had the highest or lowest impact on the class scores from anatomical and temporal perspectives.

The general idea is illustrated in Fig.  3 B. In a forward pass, we first estimated for each trial the PSO by propagating through each time step until the nVAD model made a positive prediction. From here, we then applied backpropagation through time to compute all gradients with respect to the model’s input high-gamma features. Relevance scores |R| were computed by taking the absolute value of each partial derivative and the maximum value across time was used as the final score for each electrode 38 . Note that we only performed backpropagation through time for each PSO, and not for whole speech segments.
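
For illustration, the sketch below shows how such gradient-based relevance scores can be computed in PyTorch for a stand-in nVAD model; the small network here (two LSTM layers and a linear output) and the input shapes are assumptions for this sketch, not the exact implementation used in the study.

```python
# Sketch of gradient-based relevance at the predicted speech onset (PSO):
# backpropagate the speech logit at the PSO through time and take |gradients|
# with respect to the input high-gamma features. Model and shapes are stand-ins.
import torch
import torch.nn as nn

class TinyNVAD(nn.Module):
    def __init__(self, n_channels=64, hidden=150):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, 2)            # no-speech / speech logits

    def forward(self, x):                          # x: (1, T, n_channels)
        h, _ = self.lstm(x)
        return self.out(h)                         # (1, T, 2)

def saliency_at_pso(model, high_gamma):
    """Return frame-by-channel relevance |R| and the per-electrode maxima."""
    x = high_gamma.detach().clone().unsqueeze(0).requires_grad_(True)
    logits = model(x)[0]                           # (T, 2) frame-wise logits
    preds = logits.argmax(dim=-1)                  # 0 = no-speech, 1 = speech
    pso = int(preds.argmax())                      # first frame predicted as speech
    model.zero_grad()
    logits[pso, 1].backward()                      # backpropagation through time
    rel = x.grad.abs().squeeze(0)                  # |R|, shape (T, n_channels)
    return rel, rel.max(dim=0).values

rel, per_electrode = saliency_at_pso(TinyNVAD(), torch.randn(60, 64))
```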

figure 3

Changes in high-gamma activity across motor, premotor and somatosensory cortices trigger detection of speech output. ( A ) Saliency analysis shows that changes in high-gamma activity predominantly from 300 to 100 ms prior to predicted speech onset (PSO) strongly influenced the nVAD model’s decision. Electrodes covering motor, premotor and somatosensory cortices had the strongest impact on model decisions, while electrodes covering the dorsal laryngeal area only modestly added information to the prediction. Grey electrodes were either not used, bad channels or had no notable contributions. ( B ) Illustration of the general procedure for computing relevance scores. For each time step t , relevance scores were computed by backpropagation through time across all previous high-gamma frames X t . Predictions of 0 correspond to no-speech, while 1 represents speech frames. ( C ) Temporal progression of mean magnitudes of the absolute relevance score in 3 selected channels that strongly contributed to PSOs. Shaded areas reflect the standard error of the mean (N = 60). Units of the relevance scores are in 10 –3 .

Results from the saliency analysis are shown in Fig.  3 A. For each channel, we display the PSO-specific relevance scores by encoding the maximum magnitude of the influence in the size of the circles (bigger circles mean stronger influence on the predictions), and the temporal occurrence of that maximum in the respective color coding (lighter electrodes have their maximal influence on the PSO earlier). The color bar at the bottom limits the temporal influence to − 400 ms prior to PSO, consistent with previous reports about speech planning 39 and articulatory representations 19 . The saliency analysis showed that the nVAD model relied on a broad network of electrodes covering motor, premotor and somatosensory cortices whose collective changes in the high-gamma activity were relevant for identifying speech. Meanwhile, voice activity information encoded in the dorsal laryngeal area (highlighted electrodes in the upper grid in Fig.  3 A) 19 only mildly contributed to the PSO.

Figure  3 C shows relevance scores over a time period of 1 s prior to PSO for 3 selected electrodes that strongly contributed to predicting speech onsets. In conjunction with the color coding from Fig.  3 A, the temporal associations were consistent with previous studies that examined phoneme decoding over fixed window sizes of 400 ms 18 and 500 ms 40 , 41 around speech onset times, suggesting that the nVAD model benefited from neural activity during speech planning and phonological processing 39 when identifying speech onset. We hypothesize that the decline in the relevance scores after − 200 ms can be explained by the fact that voice activity information might have already been stored in the long short-term memory of the nVAD model and thus changes in neural activity beyond this time had less influence on the prediction.

Here we demonstrate the feasibility of a closed-loop BCI that is capable of online synthesis of intelligible words using intracranial recordings from the speech cortex of an ALS clinical trial participant. Recent studies 10 , 11 , 13 , 27 suggest that deep learning techniques are a viable tool to reconstruct acoustic speech from ECoG signals. We developed an approach consisting of three consecutive RNN architectures that identify and transform neural speech correlates into an acoustic waveform that can be streamed over a loudspeaker as neurofeedback, resulting in an 80% intelligibility score on a closed-vocabulary keyword reading task.

The majority of human listeners were able to correctly recognize most synthesized words. All words from the closed vocabulary were chosen for a prior study 28 that explored speech decoding for intuitive control of a communication board rather than being constructed to elicit discriminable neural activity that benefits decoder performance. The listening tests suggest that the words “Left” and “Back” were responsible for the majority of misclassified words. These words share very similar articulatory features, and our participant’s speech impairments likely made these words less discriminable in the synthesis process.

Saliency analysis showed that our nVAD approach used information encoded in the high-gamma band across predominantly motor, premotor and somatosensory cortices, while electrodes covering the dorsal laryngeal area only marginally contributed to the identification of speech onsets. In particular, neural changes previously reported to be important for speech planning and phonological processing 19 , 39 appeared to have a profound impact. Here, the analysis indicates that our nVAD model learned a proper representation of spoken speech processes, providing a connection between neural patterns learned by the model and the spatio-temporal dynamics of speech production.

Our participant was chronically implanted with 128 subdural ECoG electrodes, roughly half of which covered cortical areas where similar high-gamma responses have been reliably elicited during overt speech 18 , 19 , 40 , 42 and have been used for offline decoding and reconstruction of speech 10 , 11 . This study and others like it 24 , 27 , 43 , 44 explored the potential of ECoG-based BCIs to augment communication for individuals with motor speech impairments due to a variety of neurological disorders, including ALS and brainstem stroke. A potential advantage of ECoG for BCI is the stability of signal quality over long periods of time 45 . In a previous study of an individual with locked-in syndrome due to ALS, a fully implantable ECoG BCI with fewer electrodes provided a stable switch for a spelling application over a period of more than 3 years 46 . Similarly, Rao et al. reported robust responses for ECoG recordings over the speech-auditory cortex for two drug-resistant epilepsy patients over a period of 1.5 years 47 . More recently, we showed that the same clinical trial participant could control a communication board with ECoG decoding of self-paced speech commands over a period of 3 months without retraining or recalibration 28 . The speech synthesis approach we demonstrated here used training data from five and a half months prior to testing and produced similar results over 3 separate days of testing, with recalibration but no retraining in each session. These findings suggest that the correspondence between neural activity in ventral sensorimotor cortex and speech acoustics did not change significantly over this time period. Although longitudinal testing over longer time periods will be needed to explicitly test this, our findings provide additional support for the stability of ECoG as a BCI signal source for speech synthesis.

Our approach used a speech synthesis model trained on neural data acquired during overt speech production. This constrains our current approach to patients with speech motor impairments in which vocalization is still possible and in which speech may still be intelligible. Given the increasing use of voice banking among people living with ALS, it may also be possible to improve the intelligibility of synthetic speech using an approach similar to ours, even in participants with unintelligible or absent speech. The banked speech could be utilized as a surrogate acoustic target but would require careful alignment with speech attempts. Likewise, the same approach could be used with a generic voice, though this would not preserve the individual’s speech characteristics. Here our results were achieved without the added challenge of absent ground truth, but they serve as an important demonstration that if adequate alignment is achieved, direct synthesis of acoustic speech from ECoG is feasible, accurate, and stable, even in a person with dysarthria due to ALS. Nevertheless, it remains to be seen how long our approach will continue to produce intelligible speech as our patient’s neural responses and articulatory impairments change over time due to ALS. Previous studies of long-term ECoG signal stability and BCI performance in patients with more severe motor impairments suggest that this may be possible 3 , 48 .

Although our approach allowed for online, closed-loop production of synthetic speech that preserved our participant’s individual voice characteristics, the bidirectional LSTM imposed a delay in the audible feedback until after the patient spoke each word. We considered this delay to be not only acceptable, but potentially desirable, given our patient’s speech impairments and the likelihood of these impairments worsening in the future due to ALS. Although normal speakers use immediate acoustic feedback to tune their speech motor output 49 , individuals with progressive motor speech impairments are likely to reach a point at which there is a significant, and distracting, mismatch between the subject’s speech and the synthetic speech produced by the BCI. In contrast, providing acoustic feedback immediately after each utterance gives the user clear and uninterrupted output that they can use to improve subsequent speech attempts, if necessary.

While our results are promising, the approach used here did not allow for synthesis of unseen words. The bidirectional architecture of the decoding model learned variations of the neural dynamics of each word and was capable of recovering their acoustic representations from corresponding sequences of high-gamma frames. This approach did not capture more fine-grained and isolated subword units, such as syllables or phonemes. However, previous research 11 , 27 has shown that speech synthesis approaches based on bidirectional architectures can generalize to unseen elements that were not part of the training set. Future research will be needed to expand the limited vocabulary used here, and to explore to what extent similar or different approaches are able to extrapolate to words that are not in the vocabulary of the training set.

Our demonstration here builds on previous seminal studies of the cortical representations for articulation and phonation 19 , 32 , 40 in epilepsy patients implanted with similar subdural ECoG arrays for less than 30 days. These studies and others using intraoperative recordings have also supported the feasibility of producing synthetic speech from ECoG high-gamma responses 10 , 11 , 33 , but these demonstrations were based on offline analysis of ECoG signals that were previously recorded in subjects with normal speech, with the exception of the work by Metzger et al. 27 Here, a participant with impaired articulation and phonation was able to use a chronically implanted investigational device to produce acoustic speech that retained his unique voice characteristics. This was made possible through online decoding of ECoG high-gamma responses, using an algorithm trained on data collected months before. Notwithstanding the current limitations of our approach, our findings here provide a promising proof-of-concept that ECoG BCIs utilizing online speech synthesis can serve as alternative and augmentative communication devices for people living with ALS. Moreover, our findings should motivate continued research on the feasibility of using BCIs to preserve or restore vocal communication in clinical populations where this is needed.

Materials and methods

Participant

Our participant was a male native English speaker in his 60s with ALS who was enrolled in a clinical trial (NCT03567213), approved by the Johns Hopkins University Institutional Review Board (IRB) and by the FDA (under an investigational device exemption) to test the safety and preliminary efficacy of a brain-computer interface composed of subdural electrodes and a percutaneous connection to external EEG amplifiers and computers. All experiments conducted in this study complied with all relevant guidelines and regulations, and were performed according to a clinical trial protocol approved by the Johns Hopkins IRB. Our participant had been diagnosed with ALS 8 years prior to implantation; his motor impairments had chiefly affected bulbar and upper extremity muscles and were sufficient to render continuous speech mostly unintelligible (though individual words were intelligible) and to require assistance with most activities of daily living. Our participant’s ability to carry out activities of daily living was assessed using the ALSFRS-R measure 50 , resulting in a score of 26 out of 48 possible points (speech was rated at 1 point, see Supplementary Data S5 ). Furthermore, speech intelligibility and speaking rate were evaluated by a certified speech-language pathologist, whose detailed assessment may be found in the Supplementary Note . The participant gave informed consent after being counseled about the nature of the research and implant-related risks and was implanted with the study device in July 2022. Additionally, the participant gave informed consent for use of his audio and video recordings in publications of the study results.

Study device and implantation

The study device was composed of two 8 × 8 subdural electrode grids (PMT Corporation, Chanhassen, MN) connected to a percutaneous 128-channel Neuroport pedestal (Blackrock Neurotech, Salt Lake City, UT). Both subdural grids contained platinum-iridium disc electrodes (0.76 mm thickness, 2-mm diameter exposed surface) with 4 mm center-to-center spacing and a total surface area of 12.11 cm 2 (36.6 mm × 33.1 mm).

The study device was surgically implanted during a standard awake craniotomy with a combination of local anesthesia and light sedation, without neuromuscular blockade. The device’s ECoG grids were placed on the pial surface of sensorimotor representations for speech and upper extremity movements in the left hemisphere. Careful attention was paid to ensure that the scalp flap incision was well away from the external pedestal. Cortical representations were targeted using anatomical landmarks from pre-operative structural (MRI) and functional imaging (fMRI), in addition to somatosensory evoked potentials measured intraoperatively. Two reference wires attached to the Neuroport pedestal were implanted in the subdural space on the outward-facing surface of the subdural grids. The participant was awoken during the craniotomy to confirm proper functioning of the study device and final placement of the two subdural grids. For this purpose, the participant was asked to repeatedly speak a single word as event-related ECoG spectral responses were noted to verify optimal placement for the implanted electrodes. On the same day, the participant had a post-operative CT which was then co-registered to a pre-operative MRI to verify the anatomical locations of the two grids.

Data recording

During all training and testing sessions, the Neuroport pedestal was connected to a 128-channel NeuroPlex-E headstage that was in turn connected by a mini-HDMI cable to a NeuroPort Biopotential Signal Processor (Blackrock Neurotech, Salt Lake City, UT, USA) and external computers. We acquired neural signals at a sampling rate of 1000 Hz.

Acoustic speech was recorded through an external microphone (BETA® 58A, SHURE, Niles, IL) in a room isolated from external acoustic and electronic noise, then amplified and digitized by an external audio interface (H6-audio-recorder, Zoom Corporation, Tokyo, Japan). The acoustic speech signal was split and forwarded to: (1) an analog input of the NeuroPort Biopotential Signal Processor (NSP) to be recorded at the same frequency and in synchrony with the neural signals, and (2) the testing computer to capture high-quality (48 kHz) recordings. We applied cross-correlation to align the high-quality recordings with the synchronized audio signal from the NSP.

Experiment recordings and task design

Each recording day began with a syllable repetition task to acquire cortical activity to be used for baseline normalization. Each syllable was audibly presented through a loudspeaker, and the participant was instructed to repeat the heard stimulus aloud. Stimulus presentation lasted for 1 s, and trial duration was set randomly between 2.5 s and 3.5 s with a step size of 80 ms. In the syllable repetition task, the participant was instructed to repeat 12 consonant–vowel syllables (Supplementary Table S4 ), with each syllable repeated 5 times. We extracted high-gamma frames from all trials to compute for each day the mean and standard deviation statistics for channel-specific normalization.

To collect data for training our nVAD and speech decoding model, we recorded ECoG during multiple blocks of a speech production task over a period of 6 weeks. During the task, the participant read aloud single words that were prompted on a computer screen, interrupted occasionally by a silence trial in which the participant was instructed to say nothing. The words came from a closed vocabulary of 6 words ("Left", "Right", "Up", "Down", "Enter", "Back", and “…” for silence) that were chosen for a separate study in which these spoken words were decoded from ECoG to control a communication board 28 . In each block, there were ten repetitions of each word (60 words in total) that appeared in a pseudo-randomized order controlled by a fixed set of random seeds. Each word was shown for 2 s per trial with an intertrial interval of 3 s. The participant was instructed to read the prompted word aloud as soon as it appeared. Because his speech was slow, effortful, and dysarthric, the participant may have sometimes used some of the intertrial interval to complete word production. However, offline analysis verified at least 1 s between the end of each spoken word and the beginning of the next trial, assuring that enough time had passed to avoid ECoG high-gamma responses leaking into subsequent trials. In each block, neural signals and audibly vocalized speech were acquired in parallel and stored to disk using BCI2000 51 .

We recorded training, validation, and test data for 10 days, and deployed our approach for synthesizing speech online five and a half months later. During the online task, the synthesized output was played to the participant while he performed the same keyword reading task as in the training sessions. The feedback from each synthesized word began after he spoke the same word, avoiding any interference with production from the acoustic feedback. The validation dataset was used for finding appropriate hyperparameters to train both the nVAD and the decoding model. The test set was used to validate final model generalizability before online sessions. We also used the test set for the saliency analysis. In total, the training set comprised 1570 trials aggregating to approximately 80 min of data (21.8 min of pure speech), while the validation and test sets each contained 70 trials with around 3 min of data (0.9 min of pure speech). The data in each of these datasets were collected on different days, so that no baseline or other statistics in the training set leaked into the validation or test set.

Signal processing and feature extraction

Neural signals were transformed into broadband high-gamma power features that have been previously reported to closely track the timing and location of cortical activation during speech and language processes 42 , 52 . In this feature extraction process, we first re-referenced all channels within each 64-contact grid to a common-average reference (CAR filtering), excluding channels with poor signal quality in any training session. Next, we selected all channels that had previously shown significant high-gamma responses during the syllable repetition task described above. This included 64 channels (Supplementary Fig. S2 , channels with blue outlines) across motor, premotor and somatosensory cortices, including the dorsal laryngeal area. From here, we applied two IIR Butterworth filters (both with filter order 8) to extract the high-gamma band in the range of 70 to 170 Hz while subsequently attenuating the first harmonic (118–122 Hz) of the line noise. For each channel, we computed logarithmic power features based on windows with a fixed length of 50 ms and a frameshift of 10 ms. To estimate speech-related increases in broadband high-gamma power, we normalized each feature by the day-specific statistics of the high-gamma power features accumulated from the syllable repetition task.
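
A minimal sketch of this feature-extraction chain using SciPy is shown below. The causal filtering, the single common-average reference across all supplied channels (rather than per grid), and the small epsilon inside the logarithm are simplifying assumptions made for illustration.

```python
# Sketch of the broadband high-gamma feature extraction: common-average
# reference, order-8 Butterworth band-pass (70-170 Hz) and band-stop
# (118-122 Hz), then log power in 50 ms windows with a 10 ms frameshift,
# z-scored by day-specific baseline statistics. Details are simplified.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 1000                                  # Hz, ECoG sampling rate
WIN, SHIFT = 50, 10                        # window / frameshift in ms

BANDPASS = butter(8, [70, 170], btype="bandpass", fs=FS, output="sos")
BANDSTOP = butter(8, [118, 122], btype="bandstop", fs=FS, output="sos")

def high_gamma_features(ecog, baseline_mean=0.0, baseline_std=1.0):
    """ecog: (n_samples, n_channels) -> (n_frames, n_channels) normalized log power."""
    ecog = ecog - ecog.mean(axis=1, keepdims=True)            # common-average reference
    hg = sosfilt(BANDSTOP, sosfilt(BANDPASS, ecog, axis=0), axis=0)
    win, shift = WIN * FS // 1000, SHIFT * FS // 1000
    frames = [np.log(np.mean(hg[s:s + win] ** 2, axis=0) + 1e-9)
              for s in range(0, hg.shape[0] - win + 1, shift)]
    return (np.asarray(frames) - baseline_mean) / baseline_std  # day-specific z-score

feats = high_gamma_features(np.random.randn(2000, 64))          # 2 s of dummy data
```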

For the acoustic recordings of the participant’s speech, we downsampled the time-aligned high-quality microphone recordings from 48 to 16 kHz. From here, we padded the acoustic data by 16 ms to account for the shift introduced by the two filters on the neural data and estimated the boundaries of speech segments using an energy-based voice activity detection algorithm 53 . Likewise, we computed acoustic features in the LPC coefficient space through the encoding functionality of the LPCNet vocoder. Both voice activity detection and LPC feature encoding were configured to operate on 10 ms frameshifts to match the number of samples from the broadband high-gamma feature extraction pipeline.
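
The sketch below illustrates an energy-based voice activity detector of this general kind on 10 ms frames; the relative-threshold rule and the -40 dB value are assumptions for illustration, not the exact algorithm cited above.

```python
# Sketch of an energy-based voice activity detector on 10 ms frames: a frame is
# labeled as speech when its energy exceeds a threshold relative to the loudest
# frame of the recording. The -40 dB threshold is an illustrative assumption.
import numpy as np

def energy_vad(audio, fs=16000, frame_ms=10, threshold_db=-40.0):
    frame = fs * frame_ms // 1000
    n = len(audio) // frame
    energy = np.array([np.sum(audio[i * frame:(i + 1) * frame] ** 2) for i in range(n)])
    db = 10 * np.log10(energy + 1e-12)
    return db > (db.max() + threshold_db)        # one boolean label per 10 ms frame
```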

Network architectures

Our proposed approach relied on three recurrent neural network architectures: (1) a unidirectional model that identified speech segments from the neural data, (2) a bidirectional model that translated sequences of speech-related high-gamma activity into corresponding sequences of LPC coefficients representing acoustic information, and (3) LPCNet 36 , which converted those LPC coefficients into an acoustic speech signal.

The network architecture of the unidirectional nVAD model was inspired by Zen et al. 54 in using a stack of two LSTM layers with 150 units each, followed by a linear fully connected output layer with two units representing speech or non-speech class target logits (Fig.  4 ). We trained the unidirectional nVAD model using truncated backpropagation through time (BPTT) 55 to keep the costs of single parameter updates manageable. We initialized this algorithm’s hyperparameters k1 and k2 to 50 and 100 frames of high-gamma activity, respectively, such that the unfolding procedure of the backpropagation step was limited to 100 frames (1 s) and repeated every 50 frames (500 ms). Dropout was used as a regularization method with a probability of 50% to counter overfitting effects 56 . Predicted and target labels were compared using the cross-entropy loss. We limited network training using an early stopping mechanism that evaluated network performance on a held-out validation set after each epoch and stored the model weights only when the frame-wise accuracy score exceeded the previous best. The learning rate of the stochastic gradient descent optimizer was dynamically adjusted in accordance with the RMSprop formula 57 with an initial learning rate of 0.001. Using this procedure, the unidirectional nVAD model was trained for 27,975 update steps, achieving a frame-wise accuracy of 93.4% on held-out validation data. The architecture of the nVAD model had 311,102 trainable weights.
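
The sketch below illustrates this architecture and a simplified variant of the truncated-BPTT scheme in PyTorch. Unlike the actual implementation, the hidden state here is restarted at each 100-frame window and the training data are random placeholders; only the layer shapes and the update cadence (k1 = 50, k2 = 100) are taken from the description above.

```python
# Sketch of the nVAD architecture and a simplified truncated-BPTT scheme
# (unroll k2 = 100 frames, advance every k1 = 50 frames). Unlike the actual
# implementation, the hidden state here is restarted at each window, and the
# data are random placeholders; only shapes and the update cadence are shown.
import torch
import torch.nn as nn

class NVAD(nn.Module):
    def __init__(self, n_channels=64, hidden=150, n_classes=2, p_drop=0.5):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2,
                            dropout=p_drop, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x, state=None):            # x: (batch, T, n_channels)
        h, state = self.lstm(x, state)
        return self.out(h), state                # frame-wise speech/non-speech logits

K1, K2 = 50, 100                                 # update every 0.5 s, unroll over 1 s
model = NVAD()
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(1, 1000, 64)                     # 10 s of high-gamma frames (dummy)
y = torch.randint(0, 2, (1, 1000))               # frame-wise speech labels (dummy)

for start in range(0, x.shape[1] - K2 + 1, K1):
    window, labels = x[:, start:start + K2], y[:, start:start + K2]
    logits, _ = model(window)                    # unrolling limited to K2 frames
    loss = criterion(logits.reshape(-1, 2), labels.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```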

figure 4

System overview of the closed-loop architecture. The computational graph is designed as a directed acyclic network. Solid shapes represent ezmsg units, dotted ones represent initialization parameters. Each unit is responsible for a self-contained task and distributes its output to all of its subscribers. Logger units run in separate processes so as not to interrupt the main processing chain for synthesizing speech.

The network architecture of the bidirectional decoding model had a very similar configuration to the unidirectional nVAD but employed a stack of bidirectional LSTM layers for sequence modelling 11 to include past and future contexts. Since the acoustic space of the LPC components was continuous, we used a linear fully connected output layer for this regression task. Figure  4 contains an illustration of the network architecture of the decoding model. In contrast to the unidirectional nVAD model, we used standard BPTT to account for both past and future contexts within each extracted segment identified as spoken speech. The architecture of the decoding model had 378,420 trainable weights and was trained for 14,130 update steps using a stochastic gradient descent optimizer. The initial learning rate was set to 0.001 and dynamically updated in accordance with the RMSProp formula. Again, we used dropout with a 50% probability and employed an early stopping mechanism that only updated model weights when the loss on the held-out validation set was lower than before.
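
A corresponding PyTorch sketch of the bidirectional decoding model is given below; the hidden size, the use of a mean-squared-error loss, and the segment length are assumptions made for illustration rather than the study's exact settings.

```python
# Sketch of a bidirectional decoding model: stacked bi-LSTM over a buffered
# speech segment with a linear head onto 20 LPCNet features (18 Bark-scale
# cepstral coefficients + 2 pitch parameters). Hidden size and the use of a
# mean-squared-error loss are illustrative assumptions.
import torch
import torch.nn as nn

class SpeechDecoder(nn.Module):
    def __init__(self, n_channels=64, hidden=150, n_features=20, p_drop=0.5):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, dropout=p_drop,
                            bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_features)    # forward + backward states

    def forward(self, x):                # x: (1, T, n_channels) buffered speech segment
        h, _ = self.lstm(x)
        return self.out(h)               # (1, T, 20) LPC features for the vocoder

# One regression update over a single extracted segment (standard BPTT):
model = SpeechDecoder()
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)
segment, target = torch.randn(1, 120, 64), torch.randn(1, 120, 20)
loss = nn.functional.mse_loss(model(segment), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```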

Both the unidirectional nVAD and the bidirectional decoding model were implemented within the PyTorch framework. For LPCNet, we used the C-implementation and pretrained model weights by the original authors and communicated with the library via wrapper functions through the Cython programming language.

Closed-loop architecture

Our closed-loop architecture was built upon ezmsg, a general-purpose framework which enables the implementation of streaming systems in the form of a directed acyclic network of connected units, which communicate with each other through a publish/subscribe software engineering pattern using asynchronous coroutines. Here, each unit represents a self-contained operation which receives inputs and optionally propagates its output to all of its subscribers. A unit consists of a settings and state class for enabling initial and updatable configurations and has multiple input and output connection streams to communicate with other nodes in the network. Figure  4 shows a schematic overview of the closed-loop architecture. ECoG signals were received by connecting to BCI2000 via a custom ZeroMQ (ZMQ) networking interface that sent packets of 40 ms of data over the TCP/IP protocol. From here, each unit interacted with other units through an asynchronous message system that was implemented on top of a shared-memory publish-subscribe multi-processing pattern. Figure  4 shows that the closed-loop architecture comprised 5 units for the synthesis pipeline, while employing several additional units that acted as loggers and wrote intermediate data to disk.
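
The snippet below illustrates the general publish/subscribe pattern with plain asyncio queues; it is deliberately not the ezmsg API, only a schematic of units that consume messages and fan their output out to all of their subscribers, with a logger hanging off the main chain.

```python
# Schematic publish/subscribe dataflow with plain asyncio queues. This is NOT
# the ezmsg API; it only illustrates units that consume messages and fan their
# output out to all subscribers, with a logger hanging off the main chain.
import asyncio

class Unit:
    def __init__(self, work):
        self.work = work                      # per-message processing function
        self.inbox = asyncio.Queue()
        self.subscribers = []

    def connect(self, other):
        self.subscribers.append(other)

    async def run(self):
        while True:
            out = self.work(await self.inbox.get())
            if out is not None:
                for sub in self.subscribers:  # publish to every subscriber
                    await sub.inbox.put(out)

async def main():
    source = Unit(lambda pkt: pkt)                    # e.g., 40 ms ECoG packets
    features = Unit(lambda pkt: ("high-gamma", pkt))  # feature-extraction stand-in
    logger = Unit(lambda msg: print("log:", msg))     # side-channel logger
    source.connect(features)
    features.connect(logger)
    tasks = [asyncio.create_task(u.run()) for u in (source, features, logger)]
    for i in range(3):
        await source.inbox.put(f"packet-{i}")
    await asyncio.sleep(0.1)
    for t in tasks:
        t.cancel()

asyncio.run(main())
```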

In order to play back the synthesized speech during closed-loop sessions, we wrote the bytes of the raw PCM waveform to standard output (stdout) and reinterpreted them by piping them into SoX. We implemented our closed-loop architecture in Python 3.10. To keep the computational complexity manageable for this streamlined application, we implemented several functionalities, such as ringbuffers or specific calculations in the high-gamma feature extraction, in Cython.
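
For illustration, raw PCM can be piped to SoX from Python roughly as follows; the sample format (16 kHz, 16-bit signed, mono) is an assumption about the vocoder output, and SoX must be installed for the snippet to run.

```python
# Sketch of playing back raw PCM by piping it into SoX; the sample format
# (16 kHz, 16-bit signed, mono) is an assumption about the vocoder output.
import subprocess

import numpy as np

sox = subprocess.Popen(
    ["sox", "-t", "raw", "-r", "16000", "-e", "signed", "-b", "16", "-c", "1",
     "-", "-d"],                                  # raw PCM from stdin -> default device
    stdin=subprocess.PIPE,
)
tone = 0.1 * np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)   # 1 s test tone
sox.stdin.write((tone * 32767).astype(np.int16).tobytes())
sox.stdin.close()
sox.wait()
```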

Contamination analysis

Overt speech production can cause acoustic artifacts in electrophysiological recordings, allowing learning machines such as neural networks to rely on leaked acoustic information that will not be available once deployed, a phenomenon widely known as the Clever Hans effect 58 . We used the method proposed by Roussel et al. 59 to assess the risk that our ECoG recordings had been contaminated. This method compares correlations between neural and acoustic spectrograms to determine a contamination index, which describes the average correlation of matching frequencies. This contamination index is then compared to the distribution of contamination indices obtained by randomly permuting the rows and columns of the contamination matrix, which allows a statistical assessment of contamination risk under the assumption that no acoustic contamination is present.
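
A simplified sketch in the spirit of this method is shown below: it correlates neural and audio spectrograms bin by bin, uses the mean correlation of matching bins as the contamination index, and builds a null distribution by permuting the rows and columns of the correlation matrix. It assumes the audio has been recorded (or resampled) at the same rate as the neural signal, as with the synchronized NSP audio channel, and it omits details of the published procedure.

```python
# Simplified contamination check: correlate neural and audio spectrogram bins over
# time, take the mean |correlation| of matching frequency bins as the contamination
# index, and compare it to indices from permuted correlation matrices. This is a
# sketch in the spirit of Roussel et al., not their exact procedure.
import numpy as np
from scipy.signal import spectrogram

def contamination_check(neural, audio, fs=1000, n_perm=1000):
    """neural, audio: 1-D signals sampled at the same rate (one channel each)."""
    f_n, _, S_n = spectrogram(neural, fs=fs, nperseg=50, noverlap=40)   # 50 ms / 10 ms
    f_a, _, S_a = spectrogram(audio, fs=fs, nperseg=50, noverlap=40)
    n_bins = min(len(f_n), len(f_a))
    # Cross-correlation matrix between all neural and all audio frequency bins.
    C = np.corrcoef(S_n[:n_bins], S_a[:n_bins])[:n_bins, n_bins:]
    index = np.mean(np.abs(np.diag(C)))              # matching-frequency correlations
    rng = np.random.default_rng(0)
    null = []
    for _ in range(n_perm):                          # permute rows and columns
        perm = C[rng.permutation(n_bins)][:, rng.permutation(n_bins)]
        null.append(np.mean(np.abs(np.diag(perm))))
    p_value = float(np.mean(np.asarray(null) >= index))
    return index, p_value
```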

For each recording day among the train, test and validation sets, we analyzed acoustic contamination in the high-gamma frequency range. We identified 1 channel (channel 46) in our recordings that was likely contaminated during 3 recording days (D5, D6, and D7), and we corrected this channel by taking the average of high-gamma power features from neighboring channels (8-neighbour configuration, excluding the bad channel 38). A detailed report can be found in Supplementary Fig. S1 , where each histogram corresponds to the distribution of permuted contamination matrices, and colored vertical bars indicate the actual contamination index, with green and red indicating the statistical criterion threshold (green: p > 0.05, red: p ≤ 0.05). After excluding the contaminated data from channel 46, Roussel’s method indicated no statistically significant acoustic contamination, and thus we concluded that acoustic speech had not interfered with the neural recordings.

Listening test

We conducted a forced-choice listening test similar to Herff et al. 14 in which 21 native English speakers evaluated the intelligibility of the synthesized output and the originally spoken words. Listeners were asked to listen to one word at a time and select which word out of the six options most closely resembled it. Here, the listeners had the opportunity to listen to each sample multiple times before submitting a choice. We implemented the listening test on top of the BeaqleJS framework 60 . All words that were either spoken or synthesized during the 3 closed-loop sessions were included in the listening test and presented in a unique random order for each listener. Supplementary Fig. S3 provides a screenshot of the interface that the listeners used.

All human listeners were recruited only through indirect means, such as IRB-approved flyers placed on campus sites, and had no direct connection to the PI. Anonymous demographic data (age and preferred gender) were collected at the end of the listening test. Overall, recruited listeners were 23.8% male and 61.9% female (14% other or preferred not to answer), with ages ranging from 18 to 30 years.

Statistical analysis

Original and reconstructed speech spectrograms were compared using Pearson's correlation coefficients for 80 mel-scaled spectral bins. For this, we transformed original and reconstructed waveforms into the spectral domain using the short-time Fourier transform (window size: 50 ms, frameshift: 10 ms, window function: Hanning), applied 80 triangular filters to focus only on perceptual differences for human listeners 61 , and Gaussianized the distribution of the acoustic space using the natural logarithm. Pearson correlation scores were calculated for each sample by averaging the correlation coefficients across frequency bins. The 95% confidence interval (two-sided) was used in the feature selection procedure while the z-criterion was Bonferroni corrected across time points. Lower and upper bounds for all channels and time points can be found in the supplementary data . Contamination analysis is based on permutation tests that use t-tests as their statistical criterion with a Bonferroni corrected significance level of α = 0.05/N, where N represents the number of frequency bins multiplied by the number of selected channels.
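
A sketch of this spectral-correlation metric is shown below; the triangular mel filterbank construction is a simplified stand-in for the filterbank used in the study, and silent (zero-variance) bins are simply skipped.

```python
# Sketch of the spectral-correlation metric: STFT with 50 ms Hann windows and
# 10 ms shifts, an 80-filter triangular mel filterbank, log compression, then
# Pearson correlation per mel bin averaged across bins. The filterbank here is
# a simplified stand-in for the one used in the study.
import numpy as np
from scipy.signal import stft

def mel_filterbank(n_filters, n_fft, fs):
    mel = lambda f: 2595 * np.log10(1 + f / 700)
    inv = lambda m: 700 * (10 ** (m / 2595) - 1)
    edges = inv(np.linspace(0, mel(fs / 2), n_filters + 2))
    bins = np.floor((n_fft // 2) * edges / (fs / 2)).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        left, center, right = bins[i], bins[i + 1], bins[i + 2]
        if center > left:
            fb[i, left:center] = (np.arange(left, center) - left) / (center - left)
        if right > center:
            fb[i, center:right] = (right - np.arange(center, right)) / (right - center)
    return fb

def spectral_correlation(original, synthesized, fs=16000, n_mels=80):
    nper, hop = int(0.05 * fs), int(0.01 * fs)               # 50 ms window, 10 ms shift
    _, _, S1 = stft(original, fs=fs, window="hann", nperseg=nper, noverlap=nper - hop)
    _, _, S2 = stft(synthesized, fs=fs, window="hann", nperseg=nper, noverlap=nper - hop)
    fb, T = mel_filterbank(n_mels, nper, fs), min(S1.shape[1], S2.shape[1])
    M1 = np.log(fb @ np.abs(S1[:, :T]) + 1e-9)               # log mel spectrograms
    M2 = np.log(fb @ np.abs(S2[:, :T]) + 1e-9)
    scores = [np.corrcoef(M1[b], M2[b])[0, 1] for b in range(n_mels)
              if M1[b].std() > 0 and M2[b].std() > 0]        # skip silent bins
    return float(np.mean(scores))                            # mean Pearson r across bins
```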

Overall, we used the SciPy stats package (version 1.10.1) for statistical evaluation, except for the contamination analysis, which was done in MATLAB with the Statistics and Machine Learning Toolbox (version 12.4).

Data availability

Neural data and anonymized speech audio are publicly available at http://www.osf.io/49rt7/ . This includes experiment recordings used as training data and experiment runs from our closed-loop sessions. Additionally, we included supporting data used for rendering the figures in the main text and in the supplementary material.

Code availability

Corresponding source code for the closed-loop BCI and scripts for generating figures can be obtained from the official Crone Lab Github page at: https://github.com/cronelab/delayed-speech-synthesis . This includes source files for training, inference, and data analysis/evaluation. The ezmsg framework can be obtained from https://github.com/iscoe/ezmsg .

Bauer, G., Gerstenbrand, F. & Rumpl, E. Varieties of the locked-in syndrome. J. Neurol. 221 , 77–91 (1979).


Smith, E. & Delargy, M. Locked-in syndrome. BMJ 330 , 406–409 (2005).


Vansteensel, M. J. et al. Fully implanted brain–computer interface in a locked-in patient with ALS. N. Engl. J. Med. 375 , 2060–2066 (2016).

Chaudhary, U. et al. Spelling interface using intracortical signals in a completely locked-in patient enabled via auditory neurofeedback training. Nat. Commun. 13 , 1236 (2022).


Pandarinath, C. et al. High performance communication by people with paralysis using an intracortical brain–computer interface. eLife 6 , e18554 (2017).

Willett, F. R., Avansino, D. T., Hochberg, L. R., Henderson, J. M. & Shenoy, K. V. High-performance brain-to-text communication via handwriting. Nature 593 , 249–254 (2021).

Oxley, T. J. et al. Motor neuroprosthesis implanted with neurointerventional surgery improves capacity for activities of daily living tasks in severe paralysis: First in-human experience. J. NeuroInterventional Surg. 13 , 102–108 (2021).


Chang, E. F. & Anumanchipalli, G. K. Toward a speech neuroprosthesis. JAMA 323 , 413–414 (2020).

Herff, C. et al. Towards direct speech synthesis from ECoG: A pilot study. In 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) 1540–1543 (2016).

Angrick, M. et al. Speech synthesis from ECoG using densely connected 3D convolutional neural networks. J. Neural Eng. 16 , 036019 (2019).


Anumanchipalli, G. K., Chartier, J. & Chang, E. F. Speech synthesis from neural decoding of spoken sentences. Nature 568 , 493–498 (2019).

Wairagkar, M., Hochberg, L. R., Brandman, D. M. & Stavisky, S. D. Synthesizing speech by decoding intracortical neural activity from dorsal motor cortex. In 2023 11th International IEEE/EMBS Conference on Neural Engineering (NER) 1–4 (2023).

Kohler, J. et al. Synthesizing speech from intracranial depth electrodes using an encoder-decoder framework. Neurons Behav. Data Anal. Theory https://doi.org/10.51628/001c.57524 (2022).

Herff, C. et al. Generating natural, intelligible speech from brain activity in motor, premotor, and inferior frontal cortices. Front. Neurosci. https://doi.org/10.3389/fnins.2019.01267 (2019).

Wilson, G. H. et al. Decoding spoken English from intracortical electrode arrays in dorsal precentral gyrus. J. Neural Eng. 17 , 066007 (2020).

Kanas, V. G. et al. Joint spatial-spectral feature space clustering for speech activity detection from ECoG signals. IEEE Trans. Biomed. Eng. 61 , 1241–1250 (2014).

Soroush, P. Z., Angrick, M., Shih, J., Schultz, T. & Krusienski, D. J. Speech activity detection from stereotactic EEG. In 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC) 3402–3407 (2021).

Mugler, E. M. et al. Direct classification of all American English phonemes using signals from functional speech motor cortex. J. Neural Eng. 11 , 035015 (2014).

Bouchard, K. E., Mesgarani, N., Johnson, K. & Chang, E. F. Functional organization of human sensorimotor cortex for speech articulation. Nature 495 , 327–332 (2013).

Bouchard, K. E. & Chang, E. F. Neural decoding of spoken vowels from human sensory-motor cortex with high-density electrocorticography. In 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society 6782–6785 (2014).

Kellis, S. et al. Decoding spoken words using local field potentials recorded from the cortical surface. J. Neural Eng. 7 , 056007 (2010).

Mugler, E. M., Goldrick, M., Rosenow, J. M., Tate, M. C. & Slutzky, M. W. Decoding of articulatory gestures during word production using speech motor and premotor cortical activity. In 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) 5339–5342 (2015).

Mugler, E. M. et al. Differential representation of articulatory gestures and phonemes in precentral and inferior frontal gyri. J. Neurosci. 38 , 9803–9813 (2018).


Moses, D. A. et al. Neuroprosthesis for decoding speech in a paralyzed person with anarthria. N. Engl. J. Med. 385 , 217–227 (2021).

Willett, F. R. et al. A high-performance speech neuroprosthesis. Nature 620 , 1031–1036 (2023).

Guenther, F. H. et al. A wireless brain–machine interface for real-time speech synthesis. PLoS ONE 4 , e8218 (2009).

Metzger, S. L. et al. A high-performance neuroprosthesis for speech decoding and avatar control. Nature 620 , 1037–1046 (2023).

Luo, S. et al. Stable decoding from a speech BCI enables control for an individual with ALS without recalibration for 3 months. Adv. Sci. 10 , 2304853 (2023).

Cooney, C., Folli, R. & Coyle, D. Neurolinguistics research advancing development of a direct-speech brain–computer interface. iScience 8 , 103–125 (2018).

Herff, C. & Schultz, T. Automatic speech recognition from neural signals: A focused review. Front. Neurosci. https://doi.org/10.3389/fnins.2016.00429 (2016).

Dash, D. et al. Neural speech decoding for amyotrophic lateral sclerosis. In Proc. Interspeech 2020, 2782–2786 (2020). https://doi.org/10.21437/Interspeech.2020-3071.

Chartier, J., Anumanchipalli, G. K., Johnson, K. & Chang, E. F. Encoding of articulatory kinematic trajectories in human speech sensorimotor cortex. Neuron 98 , 1042-1054.e4 (2018).

Akbari, H., Khalighinejad, B., Herrero, J. L., Mehta, A. D. & Mesgarani, N. Towards reconstructing intelligible speech from the human auditory cortex. Sci. Rep. 9 , 874 (2019).

Moore, B. An Introduction to the Psychology of Hearing 6th edn (Brill, 2013).

Taylor, P. Text-to-Speech Synthesis (Cambridge University Press, 2009).


Valin, J.-M. & Skoglund, J. LPCNET: Improving neural speech synthesis through linear prediction. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 5891–5895 (2019).

Montavon, G., Samek, W. & Müller, K.-R. Methods for interpreting and understanding deep neural networks. Digit. Signal Process. 73 , 1–15 (2018).


Simonyan, K., Vedaldi, A. & Zisserman, A. Deep inside convolutional networks: Visualising image classification models and saliency maps. In International Conference on Learning Representations (ICLR) (2014).

Indefrey, P. The spatial and temporal signatures of word production components: A critical update. Front. Psychol. https://doi.org/10.3389/fpsyg.2011.00255 (2011).

Ramsey, N. F. et al. Decoding spoken phonemes from sensorimotor cortex with high-density ECoG grids. NeuroImage 180 , 301–311 (2018).

Jiang, W., Pailla, T., Dichter, B., Chang, E. F. & Gilja, V. Decoding speech using the timing of neural signal modulation. In 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) 1532–1535 (2016).

Crone, N. E. et al. Electrocorticographic gamma activity during word production in spoken and sign language. Neurology 57 , 2045–2053 (2001).

Moses, D. A., Leonard, M. K., Makin, J. G. & Chang, E. F. Real-time decoding of question-and-answer speech dialogue using human cortical activity. Nat. Commun. 10 , 3096 (2019).

Herff, C. et al. Brain-to-text: Decoding spoken phrases from phone representations in the brain. Front. Neurosci. https://doi.org/10.3389/fnins.2015.00217 (2015).

Morrell, M. J. Responsive cortical stimulation for the treatment of medically intractable partial epilepsy. Neurology 77 , 1295–1304 (2011).

Pels, E. G. M. et al. Stability of a chronic implanted brain–computer interface in late-stage amyotrophic lateral sclerosis. Clin. Neurophysiol. 130 , 1798–1803 (2019).

Rao, V. R. et al. Chronic ambulatory electrocorticography from human speech cortex. NeuroImage 153 , 273–282 (2017).

Silversmith, D. B. et al. Plug-and-play control of a brain–computer interface through neural map stabilization. Nat. Biotechnol. 39 , 326–335 (2021).

Denes, P. B. & Pinson, E. The Speech Chain (Macmillan, 1993).


Cedarbaum, J. M. et al. The ALSFRS-R: A revised ALS functional rating scale that incorporates assessments of respiratory function. J. Neurol. Sci. 169 , 13–21 (1999).

Schalk, G., McFarland, D. J., Hinterberger, T., Birbaumer, N. & Wolpaw, J. R. BCI2000: A general-purpose brain-computer interface (BCI) system. IEEE Trans. Biomed. Eng. 51 , 1034–1043 (2004).

Leuthardt, E. et al. Temporal evolution of gamma activity in human cortex during an overt and covert word repetition task. Front. Hum. Neurosci. https://doi.org/10.3389/fnhum.2012.00099 (2012).

Povey, D. et al. The Kaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding (IEEE Signal Processing Society, 2011).

Zen, H. & Sak, H. Unidirectional long short-term memory recurrent neural network with recurrent output layer for low-latency speech synthesis. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 4470–4474 (2015).

Sutskever, I. Training Recurrent Neural Networks (University of Toronto, 2013).

Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. & Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15 , 1929–1958 (2014).


Ruder, S. An overview of gradient descent optimization algorithms. Preprint at https://arxiv.org/abs/1609.04747 (2016).

Lapuschkin, S. et al. Unmasking Clever Hans predictors and assessing what machines really learn. Nat. Commun. 10 , 1096 (2019).

Roussel, P. et al. Observation and assessment of acoustic contamination of electrophysiological brain signals during speech production and sound perception. J. Neural Eng. 17 , 056028 (2020).

Article   ADS   Google Scholar  

Kraft, S. & Zölzer, U. BeaqleJS: HTML5 and JavaScript based framework for the subjective evaluation of audio quality. In Linux Audio Conference (2014).

Stevens, S. S., Volkmann, J. & Newman, E. B. A scale for the measurement of the psychological magnitude pitch. J. Acoust. Soc. Am. 8 , 185–190 (1937).
