Start recording when the user speaks. All you have to do is tell us when to start listening. Then we wait for an audible noise and start recording. This is mostly useful for capturing user speech input after a "Start talking now" prompt.
🍕 Virtual tip jar: https://amazon.com/hz/wishlist/ls/EE78A23EEGQB
- You can start recording when sound is detected, or immediately
- Recording stops automatically when the user is done talking
- Works with ARC and iOS 5+
Add this to your project using Swift Package Manager. In Xcode that is simply: File > Swift Packages > Add Package Dependency... and you're done. Alternative installation options are shown below for legacy projects.
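If you prefer to declare the dependency in a Package.swift manifest instead, the entry looks roughly like the sketch below; the tools version, version requirement, and target name are placeholders, so pin to the release you actually want:
// swift-tools-version:5.5
// Package.swift (sketch; the version requirement and target name are placeholders)
import PackageDescription

let package = Package(
    name: "YourApp",
    dependencies: [
        .package(url: "https://github.com/fulldecent/FDSoundActivatedRecorder", from: "1.0.0"),
    ],
    targets: [
        .target(name: "YourApp", dependencies: ["FDSoundActivatedRecorder"]),
    ]
)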
If you are already using CocoaPods, just add 'FDSoundActivatedRecorder' to your Podfile, then run pod install.
If you are already using Carthage, just add this to your Cartfile:
github "fulldecent/FDSoundActivatedRecorder" ~> 0.1
Then run carthage update to build the framework and drag the built FDSoundActivatedRecorder.framework into your Xcode project.
First, install by adding pod 'FDSoundActivatedRecorder', '~> 1.0.0' to your Podfile.
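For reference, a complete Podfile might look something like this sketch; the target name and platform version are placeholders:
# Podfile (sketch; target name and platform version are placeholders)
platform :ios, '9.0'
use_frameworks!

target 'YourApp' do
  pod 'FDSoundActivatedRecorder', '~> 1.0.0'
end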
Import the module with:
import FDSoundActivatedRecorder
Then begin listening with:
self.recorder = FDSoundActivatedRecorder()
self.recorder.delegate = self
self.recorder.startListening()
A full implementation example is provided in this project.
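For orientation only, a minimal delegate implementation might look something like this sketch; the view controller and what it does with the saved file are placeholders, not part of this library:
import UIKit
import FDSoundActivatedRecorder

// Sketch only: a view controller that listens and reacts to the recorder's callbacks.
class RecordingViewController: UIViewController, FDSoundActivatedRecorderDelegate {
    let recorder = FDSoundActivatedRecorder()

    override func viewDidLoad() {
        super.viewDidLoad()
        recorder.delegate = self
        recorder.startListening()
    }

    func soundActivatedRecorderDidStartRecording(recorder: FDSoundActivatedRecorder) {
        print("Sound detected, recording started")
    }

    func soundActivatedRecorderDidTimeOut(recorder: FDSoundActivatedRecorder) {
        print("Listening timed out with no recording")
    }

    func soundActivatedRecorderDidAbort(recorder: FDSoundActivatedRecorder) {
        print("Listening or recording was aborted")
    }

    func soundActivatedRecorderDidFinishRecording(recorder: FDSoundActivatedRecorder, andSaved file: NSURL) {
        print("Recording saved to \(file)")
    }
}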
If your app is in the App Store, I would much appreciate it if you could add your app to https://www.cocoacontrols.com/controls/fdsoundactivatedrecorder under "Apps using this control" and "I Use This Control".
If you want to use it as a regular recorder, without the sound-activated trimming of the audio, as sketched after this list:
- Begin listening:
self.recorder.startListening()
- Begin recording:
self.recorder.startRecording()
- Finally, stop and save the recording:
self.recorder.stopAndSaveRecording()
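Putting those steps together, the manual flow might look like this sketch; the points where you call start and stop (for example, button taps) are up to your app:
// Sketch only: using the recorder without waiting for a sound trigger.
self.recorder = FDSoundActivatedRecorder()
self.recorder.delegate = self
self.recorder.startListening()        // prepare the audio session and begin monitoring

// Later, e.g. when the user taps a hypothetical record button:
self.recorder.startRecording()        // begin capturing immediately

// And when the user taps a hypothetical stop button:
self.recorder.stopAndSaveRecording()  // the saved file is delivered via the delegate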
The full API, from FDSoundActivatedRecorder.swift, is copied below:
@objc protocol FDSoundActivatedRecorderDelegate {
    /// A recording was triggered or manually started
    func soundActivatedRecorderDidStartRecording(recorder: FDSoundActivatedRecorder)
    /// No recording has started or been completed after listening for `TOTAL_TIMEOUT_SECONDS`
    func soundActivatedRecorderDidTimeOut(recorder: FDSoundActivatedRecorder)
    /// The recording and/or listening ended and no recording was captured
    func soundActivatedRecorderDidAbort(recorder: FDSoundActivatedRecorder)
    /// A recording was successfully captured
    func soundActivatedRecorderDidFinishRecording(recorder: FDSoundActivatedRecorder, andSaved file: NSURL)
}

class FDSoundActivatedRecorder : NSObject {
    /// A log-scale reading between 0.0 (silent) and 1.0 (loud), nil if not recording
    dynamic var microphoneLevel: Double
    /// Receiver for status updates
    weak var delegate: FDSoundActivatedRecorderDelegate?
    /// Listen and start recording when triggered
    func startListening()
    /// Go back in time and start recording `RISE_TRIGGER_INTERVALS` ago
    func startRecording()
    /// End the recording and send any processed & saved file to `delegate`
    func stopAndSaveRecording()
    /// End any recording or listening and discard any recorded files
    func abort()
    /// This is a PRIVATE method but it must be public because a selector is used in NSTimer (Swift bug)
    func interval()
}
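Because microphoneLevel is declared dynamic, one way to drive a level meter is key-value observing. The sketch below assumes a hypothetical levelMeterView and uses the string-based KVO API in the same older Swift style as the listing above:
// Sketch only: observing microphoneLevel to update a hypothetical level meter view.
recorder.addObserver(self, forKeyPath: "microphoneLevel", options: .New, context: nil)

override func observeValueForKeyPath(keyPath: String?, ofObject object: AnyObject?,
    change: [String : AnyObject]?, context: UnsafeMutablePointer<Void>) {
    if keyPath == "microphoneLevel" {
        if let level = change?[NSKeyValueChangeNewKey] as? Double {
            levelMeterView.progress = Float(level) // 0.0 (silent) ... 1.0 (loud)
        }
    }
}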
This library is tuned for human speech detection using Apple retail iOS devices in a quiet or noisy environment. You are welcome to tune the audio detection constants of this program for any special needs you may have. Following is a technical description of how the algorithm works, from FDSoundActivatedRecorder.swift.
V               Recording
O             /-----------\
L            /             \ Fall
U       Rise/               \
M          /                 \
E  --------                   --------
   Listening                    Done
- We listen and save audio levels every INTERVAL
- When several levels exceed the recent moving average by a threshold, we record
- (The exceeding levels are not included in the moving average)
- When several levels fall below the recent moving average by a threshold, we stop recording
- (The falling levels are not included in the moving average)
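As an illustration only, the rise trigger amounts to something like the sketch below; the constant names and values here are made up for readability and are not the library's actual tuning:
// Sketch of the rise-trigger idea; constants and names are illustrative, not the library's.
let riseThreshold = 13.0         // how far above the moving average counts as a rise
let riseTriggerSamples = 2       // how many consecutive rising samples start a recording

var quietLevels: [Double] = []   // recent levels that feed the moving average
var risingCount = 0

/// Returns true when enough consecutive samples exceed the moving average to start recording.
func processLevel(level: Double) -> Bool {
    var sum = 0.0
    for value in quietLevels { sum += value }
    let average = quietLevels.isEmpty ? level : sum / Double(quietLevels.count)

    if level > average + riseThreshold {
        // Exceeding levels are not folded back into the moving average
        risingCount += 1
        return risingCount >= riseTriggerSamples
    } else {
        risingCount = 0
        quietLevels.append(level)
        if quietLevels.count > 10 {
            quietLevels.removeFirst()   // keep the moving-average window bounded
        }
        return false
    }
}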
[ YOUR LOGO HERE ]
Please contact github.com@phor.net to discuss adding your company logo above and supporting this project.