Ensure compatibility across multiple platforms, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above. Minimize dependencies to avoid version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the core features of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

```csharp
using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
```

For local files, similar code can be used to accomplish transcription:

```csharp
await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
```

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is especially useful for applications that require immediate processing of audio data.

```csharp
using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}"));
transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}"));
```
Then connect, stream audio, and close the session when done:

```csharp
await transcriber.ConnectAsync();

// Pseudocode for obtaining audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();
```

Using LeMUR for LLM Apps

The SDK integrates with LeMUR, allowing developers to build large language model (LLM) applications on voice data. Here is an example:

```csharp
var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);
Console.WriteLine(response.Response);
```

Audio Intelligence Models

In addition, the SDK includes built-in support for audio intelligence models, enabling sentiment analysis and other advanced features.

```csharp
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}
```

For more details, visit the official AssemblyAI blog.

Image source: Shutterstock.
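The `GetAudio` call in the real-time snippet above is pseudocode. One way it might be fleshed out is with a third-party audio capture library such as NAudio (an assumption on our part; NAudio is not part of the AssemblyAI SDK, and any real application should pick whatever capture mechanism fits its platform). The sketch below assumes 16 kHz, 16-bit mono PCM to match the `SampleRate` passed to the transcriber, and reuses the `transcriber` instance from the earlier snippet:

```csharp
using NAudio.Wave; // assumed dependency: the NAudio NuGet package

// Capture 16 kHz, 16-bit mono PCM from the default microphone and
// forward each recorded buffer to the realtime transcriber.
var waveIn = new WaveInEvent
{
    WaveFormat = new WaveFormat(16_000, 16, 1), // rate, bits, channels
    BufferMilliseconds = 100
};

waveIn.DataAvailable += async (_, args) =>
{
    // Copy only the bytes actually recorded in this callback;
    // args.Buffer may be larger than args.BytesRecorded.
    var chunk = new byte[args.BytesRecorded];
    Array.Copy(args.Buffer, chunk, args.BytesRecorded);
    await transcriber.SendAudioAsync(chunk);
};

waveIn.StartRecording();
Console.ReadLine(); // stream until the user presses Enter
waveIn.StopRecording();
```

Partial and final transcripts then arrive through the `PartialTranscriptReceived` and `FinalTranscriptReceived` subscriptions shown earlier.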